Consider this scenario:
You've got a computer running a very simple operating system which has a command-line interface, and the user presses a key on the keyboard:
- The keyboard generates an electronic data signal which feeds into one of the computer's physical I/O ports (e.g. PS/2 or USB).
- The operating system contains some device driver software which controls that physical I/O port.
- The driver receives the electronic signal and translates it into a message for the O/S containing the data, thereby notifying the O/S that a key has been pressed on the keyboard.
- The operating system, which is responsible for remembering what the user has typed into the command line, stores the data in memory.
- The operating system contains another piece of device driver software which controls the display device.
- The operating system uses the display device driver to send a signal/message to the display device telling it which character to draw on the screen.
That's obviously a very simple case: a user presses a key on a keyboard, and a very simple command-line O/S displays the corresponding character on the screen. More advanced software and devices make the details more complicated, but the concept of the O/S using device drivers to communicate to/from hardware stays pretty much the same.
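The steps above can be sketched as a toy simulation. This is purely illustrative: the class and method names (`KeyboardDriver`, `DisplayDriver`, `OS`, and the "scancode" values) are all invented, and a real O/S does this in kernel code, not Python.

```python
# Toy simulation of the keypress flow described above.
# All names and "signal" values here are invented for illustration.

class DisplayDriver:
    """Pretends to control the display device."""
    def send_character(self, char):
        # In reality this would poke device registers; here we just print.
        print(char, end="")

class OS:
    """Remembers what the user has typed and echoes it to the screen."""
    def __init__(self):
        self.command_line = []             # typed data stored in memory
        self.display = DisplayDriver()

    def key_pressed(self, char):
        self.command_line.append(char)
        self.display.send_character(char)  # echo the key via the display driver

class KeyboardDriver:
    """Pretends to control the physical I/O port the keyboard is plugged into."""
    def __init__(self, operating_system):
        self.operating_system = operating_system

    def on_electrical_signal(self, raw_signal):
        # Translate the raw hardware signal into a message the O/S understands.
        char = chr(raw_signal)                   # hugely simplified "decoding"
        self.operating_system.key_pressed(char)  # notify the O/S

system = OS()
keyboard = KeyboardDriver(system)
for signal in (104, 105):   # pretend electrical signals for 'h' and 'i'
    keyboard.on_electrical_signal(signal)
# The O/S has now stored "hi" in its command-line buffer and echoed it.
```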
Why does the operating system need device drivers? Why can't the O/S just talk to the hardware directly? Because the O/S doesn't know precisely how each specific type of hardware device from each different hardware vendor actually works; it needs a standardised way of communicating with the hardware, so that the messages it sends and receives to/from the device driver are the same regardless of the exact make and model of the device.
So the O/S can't talk to the device. But maybe the device could be designed to understand exactly how to talk to the O/S instead? Well, no, that won't work either, because then the device would only ever work with one O/S - it would be pretty awful if you had to buy different devices for different operating systems, or buy a new device every time a new version of the operating system is released. A device cannot possibly know about every different kind of O/S it might ever need to work with; the only thing it should ever know about is the physical electronic connection which directly or indirectly joins it to the computer's motherboard.
So an O/S can't talk to a device, and a device can't talk to an O/S. You need an intermediary -- that's what a device driver is for; it's a bit of software which understands how to communicate with both the O/S and the device. In the simple case, it might be a pretty "dumb" pass-through which just translates and relays signals back and forth between the device and the O/S. In more complex cases it may do a whole load of processing itself (e.g. the drivers for a modern graphics card can be really complex, because users love to have lots of advanced options to change the way their graphics card works, so the device driver often has all kinds of advanced configuration options, monitoring functions, etc.).
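A "dumb" pass-through driver of the kind just described is essentially a translator working in both directions. Here's a minimal sketch of that idea; note the scancode values in the table are made up for illustration, not real PS/2 codes.

```python
# Sketch of a "dumb" pass-through driver: it just translates raw device
# signals into messages the O/S understands, and vice versa.
# The scancode table is invented for illustration (not real PS/2 codes).

SCANCODE_TO_CHAR = {0x1C: "a", 0x32: "b", 0x21: "c"}
CHAR_TO_SCANCODE = {char: code for code, char in SCANCODE_TO_CHAR.items()}

def driver_translate_to_os(raw_scancode: int) -> str:
    """Device -> O/S direction: raw signal in, standard message out."""
    return SCANCODE_TO_CHAR.get(raw_scancode, "?")

def driver_translate_to_device(char: str) -> int:
    """O/S -> device direction: standard message in, raw signal out."""
    return CHAR_TO_SCANCODE.get(char, 0x00)

print(driver_translate_to_os(0x1C))     # the O/S sees a character
print(driver_translate_to_device("b"))  # the device sees a raw scancode
```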
Of course, other end-user software applications can use the same device drivers to communicate with hardware too - not just the O/S.
Wherever you find two different bits of software which need to communicate with each other, the messaging system, protocols or set of software functions which allow them to communicate are known as an Application Programming Interface (API). In the case of device drivers, APIs are usually much more "programmer-friendly" than the raw hardware. Working with hardware directly requires fussing about with individual pins and working in 1s and 0s, and different devices all have different circuitry. That's workable, but it means the programmer needs to be an expert in the underlying architecture of the hardware in order to do anything useful, and they need to take that circuitry into consideration.
Device drivers expose an API to the O/S and other bits of software. The API typically provides a much more human-friendly way of controlling the hardware which doesn't require the programmer to understand how the individual pins are used. For example, a graphics driver might expose a function called "draw" which draws something on the screen; that's far more programmer-friendly than needing to understand that you've got to send a specific series of 1s and 0s down a specific pin on your graphics card.
A device driver contains all the nasty low-level logic which knows how to work with a specific hardware device -- that logic is hidden behind a set of standardised, human-friendly functions, so programmers writing software applications only ever need to use those functions. Switching hardware is easy: you just use a different device driver for the different hardware, and the standardised functions remain the same.
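The "swap the driver, keep the functions" idea can be sketched as two drivers exposing the same standardised interface. Everything here is invented for illustration (the vendor names, the `draw` function, the fake command strings); it just shows that the calling code never changes when the hardware does.

```python
# Two different "graphics drivers" exposing the same standardised API.
# The hardware details differ inside each driver, but callers only see draw().
# All class names and command strings are invented for illustration.

from abc import ABC, abstractmethod

class GraphicsDriver(ABC):
    """The standardised interface that applications program against."""
    @abstractmethod
    def draw(self, shape: str) -> str: ...

class VendorADriver(GraphicsDriver):
    def draw(self, shape):
        # Vendor A's hardware wants commands in one low-level format...
        return f"[vendor-A opcodes] draw {shape}"

class VendorBDriver(GraphicsDriver):
    def draw(self, shape):
        # ...Vendor B's hardware wants something completely different.
        return f"[vendor-B registers] draw {shape}"

def application(driver: GraphicsDriver) -> str:
    # Application code is identical regardless of which card is installed.
    return driver.draw("triangle")

print(application(VendorADriver()))
print(application(VendorBDriver()))
```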
The word describing this idea of putting a simple, standardised facade on top of a bunch of different things (which may be inherently complex and low-level) is abstraction. Abstraction means "generalisation": different devices with different circuitry can all have software functions written for them which broadly "do" the same thing. Although the underlying logic controlling the hardware differs, abstraction makes the devices easily interchangeable. Think of cars as an analogy: the driver's seat of a car is an abstraction. You can sit in a Honda, BMW, Ford or Nissan and the driver's "interface" will follow exactly the same standardised abstraction, with a steering wheel, gear-shift, pedals, etc.
The conceptual idea of separating things like hardware, device drivers and application software is known as layering. Like a cake, the system has "layers" which separate the top from the bottom: the device is at the bottom, the driver is in the middle, and the software is on top (or you can turn it upside-down; it doesn't matter). A layer is not a bit of software nor a bit of hardware; it's a conceptual idea about the way humans design systems to separate out responsibilities for different things (e.g. the fact that a hardware device is not responsible for understanding an O/S, and an O/S is not responsible for understanding a hardware device).
The conceptual "layer" which overlays an abstraction on top of a hardware device is known as the Hardware Abstraction Layer (HAL); typically device drivers are part of the HAL. If you draw it out on a system architecture diagram, your hardware is part of another "layer", and your end-user software applications are another "layer". It's a conceptual idea rather than a tangible thing -- a way of visualising and explaining the architecture of a computer system, by identifying the different "layers" which are responsible for doing different things in that system.
So -- software (application layer) communicates with the device driver (hardware abstraction layer), which communicates with the physical hardware (physical layer).
Don't get too caught up in terms like "layer" - it's just a way of drawing analogies between computer architecture concepts which humans might struggle to visualise and unrelated things like cakes or onions, which are pretty easy to visualise. It conveys the idea that a "layer" only knows how to communicate with its immediate neighbour - e.g. the top of the cake has no contact with the bottom of the cake because there's a layer of icing in-between.
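The "only talks to its immediate neighbour" rule can be made concrete in one last sketch: the application below holds a reference only to the driver, and the driver holds a reference only to the hardware, so the application never touches the hardware directly. All names here are invented for illustration.

```python
# Layering sketch: each layer only holds a reference to the layer directly
# beneath it. The application never touches the hardware object at all.
# All names are invented for illustration.

class Hardware:                        # physical layer (bottom)
    def set_pins(self, bits):
        return f"pins set to {bits}"

class Driver:                          # hardware abstraction layer (middle)
    def __init__(self, hardware):
        self.hardware = hardware       # the driver's only neighbour: the device

    def write_char(self, char):
        bits = format(ord(char), "08b")        # the "nasty low-level" detail
        return self.hardware.set_pins(bits)

class Application:                     # application layer (top)
    def __init__(self, driver):
        self.driver = driver           # the app's only neighbour: the driver

    def type_key(self, char):
        return self.driver.write_char(char)

app = Application(Driver(Hardware()))
print(app.type_key("A"))   # "A" becomes 01000001 on the way down
```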