- Shared memory
- Message passing
- Pipes
- Signals
Hard for cooperating processes to share information because each has an independent memory space.
How can processes talk?
- IPC (inter-process communication) is needed
Shared memory
Communication through reads and writes to shared variables
e.g.
Process 1 creates shared memory region M
Process 2 attaches memory region M to its own memory space
P1 and P2 can now communicate through memory region M
M behaves like a normal memory region
The same model applies to multiple processes sharing the same memory region.
Advantages:
- Efficient: the OS is needed only to set up the shared region
- Ease of use: Simple reads and writes to arbitrary data types
Disadvantages:
- Limited to single machine
- Requires synchronisation, which is hard
e.g.
If one process increments a counter and another process decrements it,
let the counter be in shared memory.
Each process works on its own private registers,
so the interleaving of their operations matters: different interleavings can lead to different outcomes (race condition),
because both processes are reading and writing the same shared variable.
There are 4!/(2!*2!) = 6 possible interleavings (permutations) of the four shared-memory operations,
since within each process the load must come before the store (sketched below).
This is a synchronisation problem.
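A hypothetical sketch of the counter example (the names shared_counter, process1_body and process2_body are illustrative; both bodies run sequentially in one program here just so the listing compiles and runs, whereas in the real scenario counter lives in an attached shared region M and the two bodies run in different processes):

```c
#include <stdio.h>

static int shared_counter = 5;               /* stands in for a slot in M */
static volatile int *counter = &shared_counter;

static void process1_body(void) {            /* counter++ at machine level */
    int r1 = *counter;                       /* load into private register */
    r1 = r1 + 1;
    *counter = r1;                           /* store back to shared memory */
}

static void process2_body(void) {            /* counter-- at machine level */
    int r2 = *counter;                       /* load */
    r2 = r2 - 1;
    *counter = r2;                           /* store */
}

int main(void) {
    process1_body();
    process2_body();
    printf("counter = %d\n", *counter);      /* 5 when run back-to-back */
    return 0;
}
/* Only the loads and stores touch shared memory, and within each process the
 * load must come before the store, so when the two bodies run concurrently
 * the 4 shared-memory operations can interleave in 4!/(2!*2!) = 6 ways, e.g.
 *   load1, store1, load2, store2  ->  counter ends at 5 (correct)
 *   load1, load2,  store1, store2 ->  counter ends at 4 (lost update)      */
```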
Race condition
- System behaviour depends on the exact interleaving
- Such systems are incorrect
- Possibly a huge number of interleaving scenarios
- Some are fine but some are not
POSIX Shared Memory in *nix
- Basic steps of usage:
1. Create/locate a shared region M
2. Attach M to the process memory space
3. Read/Write from M
4. Detach M from memory after use
5. Destroy M
- Only one process needs to create/destroy M
- Destroying M is only allowed when no process is still attached to it (see the sketch below)
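A minimal sketch of the five steps using the POSIX shm_open()/mmap() API, all inside one process for brevity; the name "/demo_shm", the 4096-byte size and the message are illustrative, and error handling is kept minimal:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* 1. Create/locate the shared region M */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    ftruncate(fd, 4096);                      /* size the region */

    /* 2. Attach M to this process's memory space */
    char *m = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }

    /* 3. Read/write M like normal memory */
    strcpy(m, "hello from process 1");
    printf("M contains: %s\n", m);

    /* 4. Detach M after use */
    munmap(m, 4096);
    close(fd);

    /* 5. Destroy M (done by one process, once nothing is attached) */
    shm_unlink("/demo_shm");
    return 0;
}
```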
Message Passing
Process 1 prepares a message M and sends it to Process 2
Process 2 receives the message M
Message sending and receiving are usually provided as syscalls
Properties:
- Naming: Identifying the other party in communication
- Synchronization: the behaviour of the send/receive operations
The message is stored in kernel memory space.
The OS is involved in the interaction between the processes.
Direct communication
However, this is limiting, as every process needs to name the other party explicitly
e.g. Send(P2, msg)
e.g. Receive(P1, msg)
// the sender and receiver need to specify whom to send to and receive from
Indirect communication
Messages are sent to a message store
- Port or mailbox
e.g. Send(MB, msg)
e.g. Receive(MB, msg)
MB is the mailbox
Characteristics:
One mailbox can be shared among many processes
Two synchronization behaviors: the message is either buffered, or the sender blocks until the receiver receives it; that blocking is the point of synchronization.
- Non-blocking primitives (asynchronous)
Send() proceeds regardless of whether the receiver is ready
Usually Receive() is still blocking
- Blocking primitives (synchronous)
Send() blocks if the matching Receive() has not been executed -> synchronization
The sender is stuck at Send() and its code does not continue until the receiver receives the message,
which lets us synchronise the two processes.
(A mailbox sketch follows below.)
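A minimal sketch of indirect message passing using a POSIX message queue as the mailbox MB; the name "/demo_mq", the capacities and the send-then-receive flow within one process are illustrative (normally sender and receiver are different processes). By default mq_send() blocks when the queue is full and mq_receive() blocks when it is empty; on older Linux systems link with -lrt.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Declare the mailbox capacity in advance: 10 messages of up to 64 bytes */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* Both sender and receiver open the same mailbox MB by name */
    mqd_t mb = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (mb == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Send(MB, msg): the message is copied into a kernel buffer */
    const char *msg = "hello";
    mq_send(mb, msg, strlen(msg) + 1, 0);

    /* Receive(MB, msg): blocks until a message is available */
    char buf[64];
    unsigned prio;
    mq_receive(mb, buf, sizeof buf, &prio);
    printf("got: %s\n", buf);

    mq_close(mb);
    mq_unlink("/demo_mq");               /* destroy the mailbox */
    return 0;
}
```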
Last week's Archi questions
1. For SRT, in the case where new, shorter jobs arrive at significant time gaps, wouldn't the overhead be very significant?
There is no reason for it to be significant.
A context switch can only occur upon job arrival or termination.
Larger time gaps mean even fewer context switches.
2. How does SRT deal with starvation?
It doesn't; it suffers from starvation even more than SJF.
3. Why don't we set the timer interrupt interval equal to the time quantum, so the scheduler doesn't get invoked for nothing?
Why is there a need to make time quanta multiples of the timer interrupt interval? Should we not just make ITI = time quantum?
It is up to the scheduler to decide what to do with each interrupt; it can support multiple time quanta.
Interval timer interrupts define the minimum time unit at which the scheduler can be invoked.
4. For RR, if there is only a single process, does the scheduler switch out the process and switch it back in each time quantum?
The process will stop when the timer interrupt fires, and the scheduler code will run.
The context (registers) is saved, then the scheduler runs; in this case there is nothing else to pick,
so it simply restores the register state of the same process.
This is a partial context switch: the context is just saved and then restored.
The scheduler does not continue from where it left off; it is invoked periodically,
so there is no need to save the context of the scheduler itself.
The registers are still routinely saved.
So: no, the process is not fully switched out, but a good deal of the context is still saved and restored.
5. For Linux, higher priority processes have shorter time slices while lower priority processes have longer time slices.
So if I have a CPU-intensive process and I want it to finish execution as soon as possible, should I set it to the lowest priority?
No, a longer quantum does not mean you will get more CPU time overall.
This is the same as Q4: the scheduler will interrupt you anyway, so it does not matter.
6. How does priority affect responsiveness?
Does a higher priority process get executed more frequently than a lower priority one? Yes.
If not, how does higher priority lead to better responsiveness just by being at the front of the queue?
Responsiveness is important for interactive processes. These processes need little CPU time but when they need it, they need it immediately.
Giving them higher priority means putting them at the front of the queue instead of waiting for lower priority jobs to finish (if preemptive).
They will be picked more often because they are always at the front of the queue.
When we measure responsiveness, we measure, for every CPU burst, how long it takes to start executing.
For interactive processes we characterise the distribution of response times: how long it takes to get CPU time, rather than how long the process takes as a whole.
For non-interactive processes, what matters is how long it takes to finish everything.
Shared memory vs Message passing
Location of memory:
Shared memory uses an actual memory region shared between the processes, while message passing uses kernel buffers (not shared variables)
Synchronization model: receiver side
Blocking receive
- Common
- Receiver must wait for the message if it is not already available
Non blocking receive
- Checks if a message is available
- If a message is available, retrieves it and moves on
- If not available, continues without a message (a sketch of a non-blocking receive follows below)
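A hypothetical sketch of a non-blocking receive, assuming the "/demo_mq" queue from the earlier sketch already exists: opening it with O_NONBLOCK makes mq_receive() return immediately with EAGAIN instead of waiting when the mailbox is empty.

```c
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Open the existing mailbox in non-blocking mode */
    mqd_t mb = mq_open("/demo_mq", O_RDONLY | O_NONBLOCK);
    if (mb == (mqd_t)-1) { perror("mq_open"); return 1; }

    char buf[64];                     /* must be >= the queue's mq_msgsize */
    ssize_t n = mq_receive(mb, buf, sizeof buf, NULL);
    if (n >= 0)
        printf("got a message: %s\n", buf);       /* a message was waiting */
    else if (errno == EAGAIN)
        printf("no message yet, moving on\n");    /* continue without one */

    mq_close(mb);
    return 0;
}
```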
Non-blocking send = asynchronous message passing
- Sender is never blocked,
even if the receiver has not executed the matching Receive()
- System buffers the message up to a certain capacity
- A Receive() performed by the receiver later will complete immediately
- Async is convenient, but it gives the programmer a lot of freedom and can become complex
The buffer size is finite as well
The receiver still waits for the message (Receive() is usually blocking)
Message buffers
- This is not shared memory, and it has finite space
- Under OS control -> no synchronisation needed by the user
- No amount of buffering helps when the sender is always faster than the receiver
- The user needs to declare the capacity of the mailbox in advance
Synchronous message passing
- Sender blocks until the receiver performs the matching Receive()
- Sender has to wait till receiver is ready
- Rendezvous
Because the receiver is ready, the message can be copied directly into the receiver's address space, so there is no need to worry about the size of a buffer. In contrast, with async message passing the sender does not wait,
so we need to buffer the message.
Pros of message passing
- Applicable beyond single machine
(unlike shared memory, which usually relies on the processes being on the same machine)
- Portable
- Easier synchronization
Disadvantages of Message passing
- Inefficient
Requires OS intervention upon every send and receive
- Hard to use
Requires packing/unpacking data into the supported message format
Unix Pipes
<Slide 19>
- One of the earliest IPC mechanism
- Communication channel with 2 ends (write into one end, read from the other)
Piping in shell
Unix provides the | symbol to link the output channel of one process to the input channel of another, e.g. ls | grep txt; this is known as piping
Unix pipes: IPC mech
- Shared between 2 processes
- Producer and consumer relationship
- FIFO
- Blocking semantics
- Circular bounded byte buffer with implicit synchronisation
- Writers wait when the buffer is full
- Readers wait when the buffer is empty (see the pipe sketch below)
// The lecturer skipped the remaining details here
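A minimal sketch of a pipe between a parent (writer) and a child (reader), created with pipe() and inherited across fork(); the message text is illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                       /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child: consumer */
        close(fds[1]);                /* not writing */
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf);   /* blocks until data */
        printf("child read %zd bytes: %.*s\n", n, (int)n, buf);
        close(fds[0]);
        return 0;
    }

    /* parent: producer */
    close(fds[0]);                    /* not reading */
    const char *msg = "hello via pipe";
    write(fds[1], msg, strlen(msg));  /* blocks if the pipe buffer is full */
    close(fds[1]);
    wait(NULL);
    return 0;
}
```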
Unix Signal
A signal is a way to notify a process that something has happened (a process can send one with the kill() syscall, or the OS can generate one); the receiving process then handles it.
e.g. a segmentation fault is delivered to the process as a signal
The OS takes exceptions generated by the hardware and delivers them to the process as signals
- Interprocess communication
- Recipient of the signal must handle the signal by either:
1. A default set of handlers, or
2. A user-supplied handler (see the sketch below)
- Common signals:
Kill, Stop, Continue, Memory error (segfault), Arithmetic error
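A minimal sketch of installing a user-supplied handler for SIGINT with sigaction() and then sending the signal with kill(); the handler name is illustrative, and only the async-signal-safe write() is used inside it.

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_sigint(int signum) {
    (void)signum;
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;            /* user-supplied handler */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);         /* replaces the default action */

    kill(getpid(), SIGINT);               /* send the signal to ourselves */
    printf("back in main after the handler ran\n");
    return 0;
}
```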