
Python IPC: Sharing Arrays Across Processes with Array

Inter-Process Communication (IPC) is how different processes in a program share data with each other. In Python, processes don’t share memory space by default, so when they need to work with the same data—like numbers in a list—we need a safe way to do that.

That’s where multiprocessing.Array comes in. It creates a shared array in memory that multiple processes can read and modify at the same time. By default, Python wraps the array in a lock so that individual reads and writes stay consistent.

In this article, you’ll learn how to create, update, and read from a shared array between processes using only the tools needed—nothing extra. By the end, you’ll know how to build simple and clear multi-process programs that work with shared array data.

Setting Up: Imports and Initialization

To get started, we need just a few modules. We’ll use multiprocessing to create processes and shared memory, time to add some delays, and random to make the example fun and unpredictable.

Here’s a simple setup where we create a shared array to track car speeds:

from multiprocessing import Process, Array
import time
import random

# Create a shared array of 5 integers (for 5 cars)
car_speeds = Array('i', [0, 0, 0, 0, 0])

The 'i' typecode tells Python to store C integers in the array; instead of a typecode string, you can also pass a ctypes type such as ctypes.c_int. Either way, this creates a block of shared memory that holds integer values. The car_speeds array now lives in shared memory, meaning any process we create can read from it or write to it. In our example, each position in the array represents the speed of a different car on a race track.
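Other typecodes and initializers work the same way. Here is a short sketch (the lap_times name is just illustrative): 'd' stores doubles for fractional values, and passing a length instead of a list creates a zero-filled array:

```python
from multiprocessing import Array

# 'd' stores C doubles, useful for fractional values like lap times
lap_times = Array('d', [0.0, 0.0, 0.0])

# Passing a length instead of a list creates a zero-filled array
car_speeds = Array('i', 5)

print(len(car_speeds))   # 5
print(list(car_speeds))  # [0, 0, 0, 0, 0]
```

The typecodes follow the same single-character convention as Python’s built-in array module.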

Creating and Using a Shared Array

Creating a shared array using multiprocessing.Array allows multiple processes to work on the same data in memory. This makes it easy to coordinate tasks that rely on common values, like numbers in a list. To demonstrate this, let’s build a simple traffic control system that tracks the speeds of five cars. Each car will be managed by its own process, and all speeds will be stored in a shared array.

In the example below, the parent process creates the shared array and then spawns five child processes. Each process represents a car that updates its speed randomly a few times. All processes update different positions in the same array, and when they finish, the parent reads the final speeds from that array.

from multiprocessing import Process, Array
import time
import random

# This function will be run in each process to update a car's speed
def update_speed(car_speeds, index):
    for _ in range(5):
        # Randomly change speed between 20 and 100
        new_speed = random.randint(20, 100)
        car_speeds[index] = new_speed
        print(f"Car {index + 1} speed updated to: {new_speed}")
        time.sleep(0.5)


if __name__ == "__main__":
    # Create a shared array of 5 integers (each for a car)
    car_speeds = Array('i', [0] * 5)

    # Create a process for each car to update its speed
    processes = []

    for i in range(5):
        p = Process(target=update_speed, args=(car_speeds, i))
        processes.append(p)
        p.start()

    # Wait for all processes to finish
    for p in processes:
        p.join()

    print("\nFinal car speeds:")
    for i, speed in enumerate(car_speeds):
        print(f"Car {i + 1}: {speed} km/h")

The array is created with 'i' as the typecode, meaning it stores integers. This shared memory is visible to all child processes. Each one updates only its own index in the array, so there’s no conflict. When the processes finish, the parent can print out the final state of the shared array. This simple structure makes it easy to work with shared numeric data in real-world multiprocessing tasks.
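Because the shared array supports indexing, len(), and slicing, the parent can also take a plain-list snapshot of it at any point. A small sketch (the speed values are illustrative):

```python
from multiprocessing import Array

car_speeds = Array('i', [80, 95, 70, 60, 88])

# Indexing and len() work just like a list
print(car_speeds[1])    # 95
print(len(car_speeds))  # 5

# Slicing copies the current values into a regular Python list
snapshot = car_speeds[:]
car_speeds[1] = 100
print(snapshot[1])      # still 95 -- the snapshot is independent
```

A snapshot like this is handy for logging or reporting, since later writes to the shared array don’t change the copy.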

Accessing and Modifying Elements in Child Processes

When you use multiprocessing.Array, the shared array is automatically placed in shared memory, which means you can pass it to any process and access it just like a regular list. To modify an element in the array, you simply use indexing, like array[1] = 50. Every change made by a child process is visible to all others, including the parent.

Let’s look at a fun example. Imagine we have two cars on a track, and each car runs in its own process. The first car updates its speed a few times, and so does the second. Both use the same shared array to store their current speed.

from multiprocessing import Process, Array
import time
import random

def car_one(array):
    for _ in range(3):
        speed = random.randint(40, 80)
        array[0] = speed
        print(f"Car One speed: {speed}")
        time.sleep(0.5)


def car_two(array):
    for _ in range(3):
        speed = random.randint(60, 100)
        array[1] = speed
        print(f"Car Two speed: {speed}")
        time.sleep(0.5)


if __name__ == "__main__":
    car_speeds = Array('i', [0, 0])  # Two cars

    p1 = Process(target=car_one, args=(car_speeds,))
    p2 = Process(target=car_two, args=(car_speeds,))

    p1.start()
    p2.start()

    p1.join()
    p2.join()

    print("\nFinal speeds:")
    print(f"Car One: {car_speeds[0]} km/h")
    print(f"Car Two: {car_speeds[1]} km/h")

In this setup, car_one updates the first index of the array, and car_two updates the second. Since both use the same Array object, the parent process can access their final speeds after both child processes finish. This makes shared arrays very handy when you need a simple and direct way to communicate numeric data between processes.

Looping Through Shared Arrays

Shared arrays from the multiprocessing module behave a lot like regular Python lists, especially when it comes to reading values. You can loop through them using a simple for loop, either by value or by index. This makes it easy to monitor or report on the shared state across multiple processes.

Imagine we have a monitoring system that checks the speed of all cars on a track every second. It doesn’t change anything—just reads and prints the values for observation. This is helpful in many real-world tasks where you need a “read-only” observer process.

Here’s how that could look in code:

from multiprocessing import Process, Array
import time
import random

def update_car_speeds(speeds):
    for _ in range(5):
        for j in range(len(speeds)):
            speeds[j] = random.randint(50, 100)
        time.sleep(1)


def monitor_speeds(speeds):
    for _ in range(5):
        print("Current speeds:", list(speeds))
        time.sleep(1)


if __name__ == "__main__":
    car_speeds = Array('i', [0, 0, 0])  # Three cars

    updater = Process(target=update_car_speeds, args=(car_speeds,))
    monitor = Process(target=monitor_speeds, args=(car_speeds,))

    updater.start()
    monitor.start()

    updater.join()
    monitor.join()

In this example, update_car_speeds randomly changes the speed of each car every second. Meanwhile, monitor_speeds loops through the array and prints the entire list of car speeds at the same interval. This shows how one process can loop through a shared array while another modifies it: every element access goes through the array's lock, so each read returns a valid, recently written value, though the values may change between one read and the next.
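As noted above, you can loop either by value or with an index. A quick sketch of both styles (the speed values are illustrative):

```python
from multiprocessing import Array

speeds = Array('i', [72, 88, 64])

# Loop by value
for s in speeds:
    print(s)

# Loop with an index, list-style
for i, s in enumerate(speeds):
    print(f"Car {i + 1}: {s} km/h")
```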

Using Array with a Lock

When you create a multiprocessing.Array, it automatically includes a lock to help coordinate access between processes. This means if one process is writing to the array while another is reading, they won’t clash and cause unexpected results. The lock ensures that each operation on the array happens one at a time.

Let’s say we have one process that updates the positions of vehicles on a road (represented as numbers), and another that reads and prints the positions regularly. Thanks to the default locking, each individual read and write is serialized, so the reader never picks up a corrupted value mid-write.

from multiprocessing import Process, Array
import time

def writer(shared_array):
    for _ in range(5):
        for j in range(len(shared_array)):
            shared_array[j] += 1
        print(f"Writer updated array to: {list(shared_array)}")
        time.sleep(0.5)


def reader(shared_array):
    for _ in range(5):
        print(f"Reader sees array: {list(shared_array)}")
        time.sleep(0.5)


if __name__ == "__main__":
    road_positions = Array('i', [0, 0, 0, 0])  # Four vehicle positions

    p1 = Process(target=writer, args=(road_positions,))
    p2 = Process(target=reader, args=(road_positions,))

    p1.start()
    p2.start()

    p1.join()
    p2.join()

In this code, the writer process increments each vehicle’s position in the array every half second, and the reader process prints the positions at the same interval. The built-in lock serializes each individual access, so single reads and writes never corrupt the data. Note, though, that the lock guards one operation at a time: a multi-element update or a full-list snapshot spans several separately locked operations, so if you need those to happen as one atomic unit, you can hold the array’s lock explicitly via its get_lock() method.

Fun Example: Robot Swarm Position Tracker

Imagine a swarm of robots moving around on a grid. Each robot’s position is tracked by its x and y coordinates. We can use a shared Array to store all these positions in one place, with each robot assigned a slice of the array to update its coordinates.

In this example, each robot runs in its own process and updates its position multiple times. The parent process then prints the final positions of all robots at the end.

from multiprocessing import Process, Array
import time
import random

def move_robot(robot_id, positions):
    # Each robot updates its x and y position (2 slots per robot)
    start = robot_id * 2
    for _ in range(5):
        positions[start] = random.randint(0, 100)   # x-coordinate
        positions[start + 1] = random.randint(0, 100)  # y-coordinate
        print(f"Robot {robot_id} moved to ({positions[start]}, {positions[start + 1]})")
        time.sleep(0.3)


if __name__ == "__main__":
    num_robots = 3
    # Shared array holds x,y for each robot: length = num_robots * 2
    robot_positions = Array('i', num_robots * 2)

    processes = []

    for robot in range(num_robots):
        p = Process(target=move_robot, args=(robot, robot_positions))
        processes.append(p)
        p.start()

    for p in processes:
        p.join()

    print("\nFinal robot positions:")

    for i in range(num_robots):
        x = robot_positions[i * 2]
        y = robot_positions[i * 2 + 1]
        print(f"Robot {i}: ({x}, {y})")

Here, each robot updates its own portion of the shared array safely and independently. The parent process reads all robot positions after the updates. This example shows how shared arrays can coordinate multiple processes working on related data like coordinates in a simple and organized way.

Conclusion

In this article, you learned how to use multiprocessing.Array to share array data safely between multiple processes. We covered creating shared arrays, accessing and modifying their elements in child processes, looping through arrays, and how the built-in locking keeps data consistent when reading and writing simultaneously. The fun robot swarm example showed how multiple processes can update different parts of the array independently but still share one common memory space.

Now, you can try using shared arrays for tasks like tracking sensor grids, managing simulation data, or any scenario where parallel processes need to read and update list-like data together. It’s a powerful tool for simple and effective inter-process communication in Python.
