OPERATING SYSTEMS
DECEMBER 2004
36. Semaphores are used to :
(A) Synchronise critical resources to prevent deadlock
(B) Synchronise critical resources to prevent contention
(C) Do I/O
(D) Facilitate memory management
Ans :- A
Explanation :-
In order to understand semaphores, we should first understand the problem they try to solve, called "the critical-section problem". What is the critical-section problem? Consider a system consisting of n processes. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, etc. The important thing is that, when one process is executing in its critical section, no other process is allowed to execute in its own critical section. Thus the execution of critical sections by the processes is mutually exclusive in time. The critical-section problem is to design a protocol that the processes can use to cooperate, and a semaphore is a synchronization tool designed for exactly this.
37. In which of the following storage replacement strategies is a program placed in the largest available hole in the memory?
(A) Best fit
(B) First fit
(C) Worst fit
(D) Buddy
Ans :- C
Explanation :-
First-fit, best-fit and worst-fit are the most common strategies used to select a free hole from the set of available holes.
First-fit : allocate the first hole that is big enough.
Best-fit : allocate the smallest hole that is big enough.
Worst-fit : allocate the largest hole.
38. Remote computing systems involve the use of timesharing systems and :
(A) Real time processing
(B) Batch processing
(C) Multiprocessing
(D) All of the above
Ans :- B
39. Non-modifiable procedures are called :
(A) Serially usable procedures
(B) Concurrent procedures
(C) Reentrant procedures
(D) Topdown procedures
Ans :- C
Explanation :-
This question comes under the "Memory management" topic. Non-modifiable procedures come up when discussing the advantages of paging. One important advantage of paging is the possibility of sharing common code, which is particularly important in a time-sharing environment. Let us understand this with an example.
Consider a system that supports 40 users, each of whom executes a text editor. If the text editor consists of 150K of code and 50K of data space, we would need 40 × (150K + 50K) = 8000K to support the 40 users. If the code is reentrant, however, it can be shared: one 150K copy of the code plus a private 50K data page per process, or 150K + 40 × 50K = 2150K in total.
Reentrant code, also called pure code, is non-self-modifying code: it never changes during execution, so every process can safely execute the same shared copy. So the correct answer for the above question is that non-modifiable procedures are called reentrant procedures.
40. Match the following :
(a) Disk scheduling      (1) Round robin
(b) Batch processing     (2) Scan
(c) Time sharing         (3) LIFO
(d) Interrupt processing (4) FIFO

(A) a-3, b-4, c-2, d-1
(B) a-4, b-2, c-2, d-1
(C) a-2, b-4, c-1, d-3
(D) a-3, b-4, c-1, d-2
Ans :- C
Explanation :-
This is an easy question. You can first choose the obvious matches. For example, time sharing is associated with round robin, so c-1. But there are two options in which c is matched with 1, so let us look at the other pairs. Scan is one of the disk scheduling algorithms, so we can match a with 2. Batch processing then matches b with 4 (FIFO), and interrupt processing matches d with 3 (LIFO, since the most recent interrupt is handled first). So the correct answer is C.
JUNE 2005
36. Moving a process from main memory to disk is called :
(A) Caching
(B) Termination
(C) Swapping
(D) Interruption
Ans :- C
Explanation :-
Moving a process from main memory to disk, and vice versa, is called swapping. Swapping is one of the memory management techniques used by the operating system. Since the size of RAM is limited and finite, not all the processes or programs to be executed can be made to fit in it at once. So the disk is also treated as an extension of memory, an arrangement referred to as virtual memory.
37. The principle of locality of reference justifies the use of :
(A) Virtual memory
(B) Interrupts
(C) Cache memory
(D) Secondary memory
Ans :- C
Explanation :-
In computer science, locality of reference, also called the principle of locality, is the term applied to situations where the same value or related storage locations are frequently accessed. There are three basic types of locality of reference: temporal, spatial and sequential.
Temporal locality
Here a resource that is referenced at one point in time is referenced again soon afterwards.
Spatial locality
Here the likelihood of referencing a storage location is greater if a storage location near it has been recently referenced.
Sequential locality
Here storage is accessed sequentially, in descending or ascending order.
The reason locality occurs is often because of the manner in which computer programs are created. Generally, data that are related are stored in consecutive locations in storage. One common pattern in computing is that processing is performed on a single item and then the next. This means that if a lot of processing is done, the single item will be accessed more than once, thus leading to temporal locality of reference. Furthermore, moving to the next item implies that the next item will be read, hence spatial locality of reference, since memory locations are typically read in batches.
Locality often occurs because code contains loops that tend to reference arrays or other data structures by indices.
Increasing and exploiting locality of reference are common techniques for optimization. This can happen on several levels of the memory hierarchy. Paging obviously benefits from spatial locality. A cache is a simple example of exploiting temporal locality, because it is a specially designed faster but smaller memory area, generally used to keep recently referenced data and data near recently referenced data, which can lead to potential performance increases. Data in cache does not necessarily correspond to data that is spatially close in main memory; however, data elements are brought into cache one cache line at a time. This means that spatial locality is again important: if one element is referenced, a few neighbouring elements will also be brought into cache. Finally, temporal locality plays a role on the lowest level, since results that are referenced very closely together can be kept in the machine registers. Programming languages such as C allow the programmer to suggest that certain variables are kept in registers.
38. Banker’s algorithm is for :
(A) Deadlock prevention
(B) Deadlock avoidance
(C) Deadlock detection
(D) Deadlock creation
Ans :- B
Explanation :- Banker’s algorithm is used for deadlock avoidance: a request is granted only if the resulting state is safe, i.e. some order exists in which every process can still finish.
39. Which is the correct definition of a valid process transition in an operating system ?
(A) Wake up : ready → running
(B) Dispatch : ready → running
(C) Block : ready → running
(D) Timer run out : ready → blocked
Ans :- B
Explanation :-
Dispatch moves a process from the ready state to the running state, which is a valid transition. Wake up actually moves a process from blocked to ready, block moves it from running to blocked, and timer run out moves it from running back to ready, so options (A), (C) and (D) do not describe valid transitions.