Python Process Synchronization: Managing Multiple Processes

Python is great at doing many things at once, especially when it comes to handling multiple processes. With the multiprocessing module, you can run different parts of your program at the same time — like having different animals in a zoo each doing their own thing, without waiting for one another.

But here’s the catch: when these processes share the same food bowl (or memory), they can get in each other’s way. Imagine two dogs trying to eat from the same bowl — one might knock it over while the other is still chewing.

This is where process synchronization comes in. It helps you manage how and when each process can access shared resources. It’s like giving each animal a turn or making them wait for a whistle before they start.

In this article, we’ll walk through how to:

  • Share memory between processes safely
  • Use locks, semaphores, and events to control access
  • Make sure your processes stay organized and don’t step on each other’s paws

Let’s dive in and learn how to keep your Python processes playing nice together.

Shared Memory Between Processes

In Python, when you create multiple processes, each one gets its own memory space. That means they don’t automatically share variables or data with each other. To make them work together — like animals in a team — we need to use special tools from the multiprocessing module: Value and Array.
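To see why these tools are needed, here is a tiny sketch (not part of the examples below) showing that an ordinary global variable is not shared: a child process only ever changes its own copy.

```python
from multiprocessing import Process

counter = 0  # an ordinary global variable

def increment():
    global counter
    counter += 1  # only changes the child process's own copy

if __name__ == "__main__":
    p = Process(target=increment)
    p.start()
    p.join()
    print(counter)  # still 0: the parent's copy was never touched
```

The child really did run `counter += 1`, but in its own memory space, so the parent never sees the change.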

Using Value for Single Shared Variables

Value lets you create a single variable in memory that all processes can use. Think of it like one food bowl both dogs share — and they must be careful not to knock it over while eating.

Here’s how two dogs (two processes) take turns adding food to the same bowl. We use a lock built into Value to prevent both dogs from scooping food at the same time.

from multiprocessing import Process, Value
import time

def eat_food(shared_food, dog_name):
    for _ in range(3):
        with shared_food.get_lock():  # Lock for safe updating
            shared_food.value += 1
            print(f"{dog_name} added food. Total now: {shared_food.value}")
        time.sleep(0.1)


if __name__ == "__main__":

    bowl = Value('i', 0)  # 'i' means it's an integer

    dog1 = Process(target=eat_food, args=(bowl, "Dog1"))
    dog2 = Process(target=eat_food, args=(bowl, "Dog2"))

    dog1.start()
    dog2.start()

    dog1.join()
    dog2.join()

    print(f"Final food count in bowl: {bowl.value}")

In the example above, we use Value('i', 0) to create a shared integer. The 'i' tells Python that the type of value we’re sharing is an integer, and we start it at zero. Each dog process runs the eat_food() function, which increases the shared value three times. We use shared_food.get_lock() to make sure that only one process can change the value at a time. This avoids any possible data clashes. After both dogs are done, the shared food count should be six, since each added three pieces.
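The 'i' is just one of several type codes Value accepts; they come from Python's array module. A small sketch of a few common ones (the variable names and starting values here are arbitrary):

```python
from multiprocessing import Value

# Value accepts the type codes from Python's array module:
count = Value('i', 0)           # 'i' = signed integer
temperature = Value('d', 36.5)  # 'd' = double-precision float
flag = Value('b', 1)            # 'b' = signed char, handy as a tiny on/off flag

temperature.value += 0.5        # reads and writes go through .value
print(temperature.value)        # 37.0
```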

Using Array for Shared Lists

What if we want to share more than just a number? That’s where Array comes in. It’s like a list that all your animal processes can read from and write to.

Let’s say we have three cats and one shared list. Each cat picks a nap spot and writes it into the shared array.

from multiprocessing import Process, Array
import time

def pick_nap_spot(spots, index, name, location):
    with spots.get_lock():
        # Pad or trim to exactly 20 characters (ASCII assumed), then encode
        encoded = location.ljust(20)[:20].encode('utf-8')
        for j in range(20):
            spots[index * 20 + j] = encoded[j]
        print(f"{name} chose nap spot: {location}")
    time.sleep(0.1)


def get_spot(spots, index):
    raw = bytes(spots[index * 20:(index + 1) * 20])
    return raw.decode('utf-8').strip()


if __name__ == "__main__":

    # Create a byte array for 3 spots, each 20 bytes
    spots = Array('B', 60)  # 'B' = unsigned char (byte)

    cat_names = ["Whiskers", "Mittens", "Shadow"]
    nap_locations = ["Sunny Window", "Bookshelf", "Sofa Corner"]

    processes = []

    for i in range(3):
        p = Process(target=pick_nap_spot, args=(spots, i, cat_names[i], nap_locations[i]))
        processes.append(p)
        p.start()

    for p in processes:
        p.join()

    print("\nFinal Nap Spots:")

    for i in range(3):
        print(f"{cat_names[i]}: {get_spot(spots, i)}")

In this example, we use a shared array of 60 bytes, which gives us space for three strings of 20 characters each. The array is created with the 'B' type code, which stands for unsigned bytes. Each cat is given a 20-byte slice of the array to write its favorite nap spot. Before writing, the nap spot string is padded or trimmed to exactly 20 characters and then encoded into bytes. We manually copy each byte into the correct section of the shared array. To safely read the results later, we use a helper function called get_spot(), which slices out 20 bytes and decodes them back into a readable string. We also use get_lock() to make sure only one cat is writing to the array at a time. Once all the cats are done, we print out their chosen nap spots from the shared memory.

Locking for Data Safety

When multiple processes try to change the same data at the same time, things can go wrong. This is called a race condition — it’s like two squirrels racing to the same acorn stash and knocking things over. To prevent that, Python gives us the Lock object from the multiprocessing module. It works like a gatekeeper: only one process can enter the critical section at a time.

Using Lock to Avoid Race Conditions

Let’s look at what happens when two squirrels try to store acorns into the same basket — first without a lock, and then with a lock.

Without a Lock

In this first version, both squirrels take turns adding acorns to a shared basket. But there’s no lock, so if both try to update the basket at the same time, things can get messy.

from multiprocessing import Process, Value
import time

def store_acorns(basket, name):
    for _ in range(5):
        current = basket.value
        time.sleep(0.01)  # Simulate delay
        basket.value = current + 1
        print(f"{name} stored an acorn. Total now: {basket.value}")

if __name__ == "__main__":

    acorn_basket = Value('i', 0)

    squirrel1 = Process(target=store_acorns, args=(acorn_basket, "Squirrel1"))
    squirrel2 = Process(target=store_acorns, args=(acorn_basket, "Squirrel2"))

    squirrel1.start()
    squirrel2.start()

    squirrel1.join()
    squirrel2.join()

    print(f"\nFinal acorn count: {acorn_basket.value}")

In this version, both squirrels update the same shared value without any lock. Each one reads the value, pauses, and then writes it back, so one squirrel's write can silently overwrite the other's. Even though each squirrel stores five acorns, the final count will usually come out less than 10.

With a Lock

Now, we introduce a lock. This time, each squirrel must wait its turn before it can add to the basket. The lock makes sure only one can access the basket at a time.

from multiprocessing import Process, Value, Lock
import time

def store_acorns_safely(basket, lock, name):
    for _ in range(5):
        with lock:
            current = basket.value
            time.sleep(0.01)  # Simulate delay
            basket.value = current + 1
            print(f"{name} stored an acorn. Total now: {basket.value}")


if __name__ == "__main__":

    acorn_basket = Value('i', 0)
    lock = Lock()

    squirrel1 = Process(target=store_acorns_safely, args=(acorn_basket, lock, "Squirrel1"))
    squirrel2 = Process(target=store_acorns_safely, args=(acorn_basket, lock, "Squirrel2"))

    squirrel1.start()
    squirrel2.start()

    squirrel1.join()
    squirrel2.join()

    print(f"\nFinal acorn count (with lock): {acorn_basket.value}")

In this version, we use a Lock to protect the shared value. Only one squirrel can enter the locked section at a time, which guarantees that the read-update-write steps happen safely. As a result, the final acorn count will always be correct — 10, with no missing updates.

Synchronizing Access with Semaphore

Sometimes, you have many processes but only a few resources to share. Imagine five monkeys but only two swings in the playground. If all five try to swing at once, some will have to wait. This is where a Semaphore helps. It controls how many processes can access a resource at the same time.

A Semaphore is like a counter that lets a fixed number of processes enter a critical section. When the count is zero, others wait until someone leaves and increases the count again.

Here, five monkeys want to play on just two swings. The semaphore is set to 2, so only two monkeys can swing at the same time. Others wait patiently.

from multiprocessing import Process, Semaphore
import time
import random

def monkey_swing(semaphore, name):
    print(f"{name} wants to swing.")
    with semaphore:  # Wait for permission (acquire the semaphore)
        print(f"{name} is swinging!")
        time.sleep(random.uniform(0.5, 1.5))  # Swing for a bit
        print(f"{name} is done swinging and leaves the swing.")


if __name__ == "__main__":

    swings = Semaphore(2)  # Only 2 swings available

    monkey_names = ["George", "Fred", "Percy", "Harry", "Ron"]
    processes = []

    for name in monkey_names:
        p = Process(target=monkey_swing, args=(swings, name))
        processes.append(p)
        p.start()

    for p in processes:
        p.join()

    print("\nAll monkeys have finished swinging.")

In this example, the Semaphore(2) means at most two monkeys can swing at once. Each monkey tries to acquire the semaphore before swinging. If both swings are taken, others wait until a swing is free. This way, the playground stays orderly, and monkeys take turns nicely.
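Beyond the with statement, acquire() also takes a timeout, so a monkey can give up instead of waiting forever. A small single-process sketch of the counter behaviour:

```python
from multiprocessing import Semaphore

swings = Semaphore(2)            # internal counter starts at 2

swings.acquire()                 # first swing taken (counter drops to 1)
swings.acquire()                 # second swing taken (counter drops to 0)

# The counter is 0, so a third acquire would block; give up after a timeout:
got_swing = swings.acquire(timeout=0.1)
print(got_swing)                 # False: both swings are busy

swings.release()                 # one monkey leaves (counter back to 1)
got_after_release = swings.acquire(timeout=0.1)
print(got_after_release)         # True: a swing was free
```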

Signaling Between Processes with Event

When processes need to wait for a signal before doing something, Event is perfect. It’s like a flag all processes watch. When the flag is raised, everyone knows it’s time to act.

Imagine a group of friends ready to watch a movie. They’re waiting for popcorn to be ready before pressing play. The popcorn ready signal is an Event that tells them to start watching.

from multiprocessing import Process, Event
import time

def friend(name, popcorn_ready):
    print(f"{name} is waiting for popcorn...")
    popcorn_ready.wait()  # Wait until popcorn is ready
    print(f"{name} started watching the movie!")


def make_popcorn(popcorn_ready):
    print("Making popcorn...")
    time.sleep(3)  # Simulate popcorn making time
    print("Popcorn is ready!")
    popcorn_ready.set()  # Signal that popcorn is ready


if __name__ == "__main__":

    popcorn_ready = Event()

    friends = ["Harry", "Hermione", "Ron"]
    processes = []

    for friend_name in friends:
        p = Process(target=friend, args=(friend_name, popcorn_ready))
        processes.append(p)
        p.start()

    popcorn_maker = Process(target=make_popcorn, args=(popcorn_ready,))
    popcorn_maker.start()

    popcorn_maker.join()

    for p in processes:
        p.join()

    print("\nMovie night started for all friends!")

In this example, each friend process waits for the popcorn ready event before starting the movie. The popcorn maker process simulates making popcorn and then signals that it’s ready. Once the event is set, all waiting friends start watching together.
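Event has a few other handy methods worth knowing: wait() accepts a timeout, is_set() checks the flag without blocking, and clear() lowers it again. A quick single-process sketch:

```python
from multiprocessing import Event

popcorn_ready = Event()
print(popcorn_ready.is_set())   # False: the flag starts lowered

# wait() takes an optional timeout, so a friend doesn't block forever:
arrived = popcorn_ready.wait(timeout=0.1)
print(arrived)                  # False: timed out, popcorn still not ready

popcorn_ready.set()             # raise the flag
ready = popcorn_ready.wait(timeout=0.1)
print(ready)                    # True: returns immediately once set

popcorn_ready.clear()           # lower the flag for the next movie night
print(popcorn_ready.is_set())   # False again
```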

Using Manager for More Complex Shared Data

Sometimes processes need to share more complex data than simple numbers or arrays. Python’s multiprocessing.Manager helps by creating shared objects like lists and dictionaries that multiple processes can access and modify safely.

Imagine a zoo keeper keeping a log of animal feeding times. Several workers can add notes and read the log at the same time, thanks to the manager.

from multiprocessing import Process, Manager
import time

def add_feed_log(log, worker_name, animal, time_fed):
    print(f"{worker_name} adding log for {animal}")
    log.append(f"{animal} fed at {time_fed}")
    time.sleep(0.5)


def read_logs(log):
    print("Current feed logs:")
    for entry in log:
        print(f"  - {entry}")


if __name__ == "__main__":

    with Manager() as manager:

        feed_log = manager.list()  # Shared list for feed logs

        workers = [
            Process(target=add_feed_log, args=(feed_log, "Worker1", "Lion", "08:00")),
            Process(target=add_feed_log, args=(feed_log, "Worker2", "Elephant", "08:15")),
            Process(target=add_feed_log, args=(feed_log, "Worker3", "Giraffe", "08:30")),
        ]

        for w in workers:
            w.start()

        for w in workers:
            w.join()

        read_logs(feed_log)

In this example, a shared list managed by Manager stores feeding logs. Each worker process adds an entry to the log. After all workers finish, the main process reads and prints all feed logs. This way, even complex data structures like lists stay synchronized across processes.

Combining Synchronization Tools

Real-world tasks often need several synchronization tools working together. Imagine a farm where animals wait for a bell to ring before going to eat. Once the bell rings, animals update the shared feeding records. To keep things safe, a Lock controls access while the shared data is managed by a Manager. The Event signals when feeding time starts.

from multiprocessing import Process, Manager, Lock, Event
import time
import random

def animal_process(name, feeding_bell, feed_log, lock):
    print(f"{name} is waiting for the feeding bell.")
    feeding_bell.wait()  # Wait for bell signal
    time.sleep(random.uniform(0.1, 0.5))  # Animal takes time to eat
    with lock:
        feed_log.append(f"{name} ate at {time.strftime('%H:%M:%S')}")
        print(f"{name} updated feeding record.")


def ring_bell(feeding_bell):
    print("Feeding bell will ring in 2 seconds...")
    time.sleep(2)
    print("Bell rings! Feeding time starts!")
    feeding_bell.set()  # Signal all animals

if __name__ == "__main__":

    with Manager() as manager:

        feed_log = manager.list()
        lock = Lock()
        feeding_bell = Event()

        animals = ["Amber", "Brian", "Cherish", "Daisy", "Echo"]
        processes = []

        for animal in animals:
            p = Process(target=animal_process, args=(animal, feeding_bell, feed_log, lock))
            processes.append(p)
            p.start()

        bell_process = Process(target=ring_bell, args=(feeding_bell,))

        bell_process.start()
        bell_process.join()

        for p in processes:
            p.join()

        print("\nFinal feeding records:")

        for record in feed_log:
            print(f" - {record}")

Here, each animal waits for the feeding_bell event before starting to eat. Once the bell rings, animals compete to update the shared feeding log. The Lock ensures that only one animal writes to the log at a time, keeping the records safe. The Manager makes sure the list is shared properly among processes.

Running and Joining Processes

When working with multiple processes, it’s important to start each process so it runs, and then join them to wait until they all finish. This ensures your program runs in order and nothing gets left behind.

Starting processes lets them run in parallel, while joining makes the main program wait for all to complete before moving on. This is key for synchronization and clean program exit.

from multiprocessing import Process, Lock, Event, Manager
import time

def worker(name, event, lock, shared_list):
    print(f"{name} waiting for the event to start.")
    event.wait()  # Wait for signal to start
    with lock:
        shared_list.append(f"{name} started at {time.strftime('%H:%M:%S')}")
        print(f"{name} added entry to the list.")


def main():

    with Manager() as manager:

        lock = Lock()
        start_event = Event()
        shared_list = manager.list()

        names = ["Albus", "Molly", "Sirius", "Lois"]

        processes = []

        for name in names:
            p = Process(target=worker, args=(name, start_event, lock, shared_list))
            processes.append(p)
            p.start()

        print("Main process doing setup...")
        time.sleep(2)
        print("Main process signaling workers to start.")
        start_event.set()  # Signal all workers

        for p in processes:
            p.join()

        print("\nAll workers finished. Shared list contents:")
        for entry in shared_list:
            print(f" - {entry}")

if __name__ == "__main__":
    main()

This example shows multiple processes waiting for an event to start together. Each uses a lock to safely add its entry to a shared list. The main program waits for all workers to finish by joining each process. This way, everything stays orderly and synchronized.

Conclusion

In this article, we explored how Python’s multiprocessing module lets you manage multiple processes using synchronization tools. We looked at how Value and Array allow basic data sharing, how Lock and Semaphore help control access to resources, and how Event lets processes wait for a signal. We also saw how Manager supports shared lists and dictionaries, making it easy to manage more complex data. Finally, we combined these tools in fun, real-world scenarios—from zoos and farms to movie nights and monkey playgrounds.

Each synchronization tool has its role, and when combined thoughtfully, they can help you build powerful multi-process programs. Try mixing and matching them in your own small simulations. Whether it’s animals, robots, or anything else, Python makes process coordination not only possible—but fun.