The Linux Kernel Module Programming Guide is a free book; you may reproduce and/or modify it under the terms of the \href{https://opensource.org/licenses/OSL-3.0}{Open Software License}, version 3.0.
This book is distributed in the hope it will be useful, but without any warranty, without even the implied warranty of merchantability or fitness for a particular purpose.
The author encourages wide distribution of this book for personal or commercial use, provided the above copyright notice remains intact and the method adheres to the provisions of the \href{https://opensource.org/licenses/OSL-3.0}{Open Software License}.
In summary, you may copy and distribute this book free of charge or for a profit. No explicit permission is required from the author for reproduction of this book in any medium, physical or electronic.
Derivative works and translations of this document must be placed under the Open Software License, and the original copyright notice must remain intact.
If you have contributed new material to this book, you must make the material and source code available for your revisions.
If you publish or distribute this book commercially, donations, royalties, and/or printed copies are greatly appreciated by the author and the \href{https://tldp.org/}{Linux Documentation Project} (LDP).
Eventually, Peter no longer had time to follow developments with the 2.6 kernel, so Michael Burian became a co-maintainer to update the document for the 2.6 kernels.
Bob Mottram updated the examples for 3.8+ kernels.
You know C, you have written a few normal programs to run as processes, and now you want to get to where the real action is, to where a single wild pointer can wipe out your file system and a core dump means a reboot.
For the purposes of following this guide you don't necessarily need to do that.
However, it would be wise to run the examples within a test distribution running on a virtual machine in order to avoid any possibility of messing up your system.
Modules can not print to the screen like \cpp|printf()| can, but they can log information and warnings, which end up being printed on your screen, but only on a console.
In the \verb|Makefile|, \verb|$(CURDIR)| is set to the absolute pathname of the current working directory (after all \verb|-C| options are processed, if any).
See more about \verb|CURDIR| in the \href{https://www.gnu.org/software/make/manual/make.html}{GNU make manual}.
If there is no \verb|PWD := $(CURDIR)| statement in the Makefile, then it may not compile correctly with \verb|sudo make|.
This is because some environment variables are restricted by the security policy and are not inherited.
The default security policy is \verb|sudoers|.
In the \verb|sudoers| security policy, \verb|env_reset| is enabled by default, which restricts environment variables.
Specifically, path variables are not retained from the user environment; they are set to default values (for more information see the \href{https://www.sudo.ws/docs/man/sudoers.man/}{sudoers manual}).
You can see the environment variable settings by:
\begin{verbatim}
$ sudo -s
# sudo -V
\end{verbatim}
Here is a simple Makefile as an example to demonstrate the problem mentioned above.
\begin{code}
all:
	echo $(PWD)
\end{code}
Then, we can use the \verb|-p| flag to print out the environment variable values from the Makefile.
\begin{verbatim}
$ make -p | grep PWD
PWD = /home/ubuntu/temp
OLDPWD = /home/ubuntu
echo $(PWD)
\end{verbatim}
The \verb|PWD| variable won't be inherited with \verb|sudo|.
\begin{verbatim}
$ sudo make -p | grep PWD
echo $(PWD)
\end{verbatim}
However, there are three ways to solve this problem.
\begin{enumerate}
\item{
You can use the \verb|-E| flag to temporarily preserve them.
\begin{codebash}
$ sudo -E make -p | grep PWD
PWD = /home/ubuntu/temp
OLDPWD = /home/ubuntu
echo $(PWD)
\end{codebash}
}
\item{
You can disable \verb|env_reset| by editing \verb|/etc/sudoers| as root with \verb|visudo|.
\begin{code}
## sudoers file.
##
...
Defaults env_reset
## Change env_reset to !env_reset in previous line to keep all environment variables
\end{code}
Then execute \verb|env| and \verb|sudo env| individually.
Kernel modules must have at least two functions: a "start" (initialization) function called \cpp|init_module()| which is called when the module is \sh|insmod|ed into the kernel, and an "end" (cleanup) function called \cpp|cleanup_module()| which is called just before it is removed from the kernel.
You can now use whatever name you like for the start and end functions of a module, and you will learn how to do this in Section \ref{hello_n_goodbye}.
Typically, \cpp|init_module()| either registers a handler for something with the kernel, or it replaces one of the kernel functions with its own code (usually code to do something and then call the original function).
The \cpp|cleanup_module()| function is supposed to undo whatever \cpp|init_module()| did, so the module can be unloaded safely.
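To make this concrete, here is a minimal sketch of such a module (essentially a classic hello world; the messages are illustrative) that does nothing except log a line on load and unload.
\begin{code}
#include <linux/kernel.h> /* for pr_info() */
#include <linux/module.h> /* needed by all modules */

int init_module(void)
{
    pr_info("Hello world.\n");
    /* A non-zero return means init_module failed; the module can't be loaded. */
    return 0;
}

void cleanup_module(void)
{
    pr_info("Goodbye world.\n");
}

MODULE_LICENSE("GPL");
\end{code}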
We needed to include \verb|<linux/kernel.h>| only for the macro expansion for the \cpp|pr_alert()| log level, which you'll learn about in Section \ref{sec:printk}.
Another thing which may not be immediately obvious to anyone getting started with kernel programming is that indentation within your code should use \textbf{tabs} and \textbf{not spaces}.
It is one of the coding conventions of the kernel.
You may not like it, but you'll need to get used to it if you ever submit a patch upstream.
Take time to read through the available priority macros.
\item About Compiling.
Kernel modules need to be compiled a bit differently from regular userspace apps.
Former kernel versions required us to care much about these settings, which are usually stored in Makefiles.
Although hierarchically organized, many redundant settings accumulated in sublevel Makefiles and made them large and rather difficult to maintain.
Fortunately, there is a new way of doing these things, called kbuild, and the build process for external loadable modules is now fully integrated into the standard kernel build mechanism.
To learn more on how to compile modules which are not part of the official kernel (such as all the examples you will find in this guide), see file \src{Documentation/kbuild/modules.rst}.
Additional details about Makefiles for kernel modules are available in \src{Documentation/kbuild/makefiles.rst}. Be sure to read this and the related files before starting to hack Makefiles. It will probably save you lots of work.
In early kernel versions you had to use the \cpp|init_module| and \cpp|cleanup_module| functions, as in the first hello world example, but these days you can name those anything you want by using the \cpp|module_init| and \cpp|module_exit| macros.
For those not familiar with it, the \verb|obj-$(CONFIG_FOO)| entries you see everywhere expand into \verb|obj-y| or \verb|obj-m|, depending on whether the \verb|CONFIG_FOO| variable has been set to \verb|y| or \verb|m|.
While we are at it, those are exactly the kind of variables that you set in the \verb|.config| file in the top-level directory of the Linux kernel source tree, the last time you ran \sh|make menuconfig| or something like that.
The \cpp|__init| macro causes the init function to be discarded and its memory freed once the init function finishes for built-in drivers, but not loadable modules.
The \cpp|__exit| macro causes the omission of the function when the module is built into the kernel, and like \cpp|__init|, has no effect for loadable modules.
Again, if you consider when the cleanup function runs, this makes complete sense; built-in drivers do not need a cleanup function, while loadable modules do.
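Putting these pieces together, a sketch with illustrative function names looks like this; the \cpp|__init| and \cpp|__exit| markers are what let the kernel discard or omit the functions as described above.
\begin{code}
#include <linux/init.h>   /* for __init and __exit */
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("Hello from a custom-named init function.\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("Goodbye from a custom-named exit function.\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
\end{code}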
To allow arguments to be passed to your module, declare the variables that will take the values of the command line arguments as global and then use the \cpp|module_param()| macro, (defined in \src{include/linux/moduleparam.h}) to set the mechanism up.
The \cpp|module_param()| macro takes 3 arguments: the name of the variable, its type and permissions for the corresponding file in sysfs.
Integer types can be signed as usual or unsigned. If you'd like to use arrays of integers or strings see \cpp|module_param_array()| and \cpp|module_param_string()|.
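As a hedged sketch (the variable names, defaults and module file name below are purely illustrative), parameter declarations typically look like this:
\begin{code}
#include <linux/module.h>
#include <linux/moduleparam.h>

static int myint = 42;
static char *mystring = "default";
static int myintarray[2] = { 0, 0 };
static int arr_argc = 0;

/* module_param(name, type, permissions of the corresponding file in sysfs) */
module_param(myint, int, 0444);
MODULE_PARM_DESC(myint, "An integer");
module_param(mystring, charp, 0444);
MODULE_PARM_DESC(mystring, "A character string");
module_param_array(myintarray, int, &arr_argc, 0444);
MODULE_PARM_DESC(myintarray, "An array of integers");
\end{code}
Such parameters could then be set at load time with something like \sh|insmod mymodule.ko myint=7 mystring="bebop"|.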
Obviously, we strongly suggest you recompile your kernel, so that you can enable a number of useful debugging features, such as forced module unloading (\cpp|MODULE_FORCE_UNLOAD|): when this option is enabled, you can force the kernel to unload a module even when it believes it is unsafe, via a \sh|sudo rmmod -f module| command.
There are a number of cases in which you may want to load your module into a precompiled running kernel, such as the ones shipped with common Linux distributions, or a kernel you have compiled in the past.
In certain circumstances you may need to compile and insert a module into a running kernel that you are not allowed to recompile, or on a machine that you prefer not to reboot.
If you can't think of a case that will force you to use modules for a precompiled kernel you might want to skip this and treat the rest of this chapter as a big footnote.
Now, if you just install a kernel source tree, use it to compile your kernel module and you try to insert your module into the kernel, in most cases you would obtain an error as follows:
In other words, your kernel refuses to accept your module because version strings (more precisely, \textit{version magic}, see \src{include/linux/vermagic.h}) do not match.
Incidentally, version magic strings are stored in the module object in the form of a static string, starting with \cpp|vermagic:|.
Version data are inserted in your module when it is linked against the \verb|kernel/module.o| file.
To overcome this problem we could resort to the \verb|--force-vermagic| option, but this solution is potentially unsafe, and unquestionably unacceptable in production modules.
Let's focus again on the previous error message: a closer look at the version magic strings suggests that, even with two configuration files which are exactly the same, a slight difference in the version magic could be possible, and it is sufficient to prevent insertion of the module into the kernel.
That slight difference, namely the custom string which appears in the module's version magic and not in the kernel's one, is due to a modification with respect to the original, in the makefile that some distributions include.
If you do not desire to actually compile the kernel, you can interrupt the build process (CTRL-C) just after the SPLIT line, because at that time, the files you need are ready.
Now you can turn back to the directory of your module and compile it: It will be built exactly according to your current kernel settings, and it will load into it without any errors.
A program usually begins with a \cpp|main()| function, executes a bunch of instructions and terminates upon completion of those instructions.
Kernel modules work a bit differently. A module always begins with either the \cpp|init_module| function or the function you specify with the \cpp|module_init| call.
This is the entry function for modules; it tells the kernel what functionality the module provides and sets up the kernel to run the module's functions when they are needed.
Once it does this, the entry function returns and the module does nothing until the kernel wants to do something with the code that the module provides.
Since there's more than one way to specify entry and exit functions, I will try my best to use the terms ``entry function'' and ``exit function'', but if I slip and simply refer to them as \cpp|init_module| and \cpp|cleanup_module|, I think you will know what I mean.
The definitions for these functions do not actually enter your program until the linking stage, which ensures that the code (for \cpp|printf()| for example) is available, and fixes the call instruction to point to that code.
Kernel modules are different here, too. In the hello world example, you might have noticed that we used a function, \cpp|pr_info()| but did not include a standard I/O library.
One point to keep in mind is the difference between library functions and system calls. Library functions are higher level, run completely in user space and provide a more convenient interface for the programmer to the functions that do the real work --- system calls.
System calls run in kernel mode on the user's behalf and are provided by the kernel itself.
The library function \cpp|printf()| may look like a very general printing function, but all it really does is format the data into strings and write the string data using the low-level system call \cpp|write()|, which then sends the data to standard output.
\href{https://strace.io/}{strace} is a handy program that gives you details about what system calls a program is making, including which call is made, what its arguments are and what it returns.
It is an invaluable tool for figuring out things like what files a program is trying to access.
Crackers often make use of this sort of thing for backdoors or trojans, but you can write your own modules to do more benign things, like have the kernel write ``Tee hee, that tickles!'' every time someone tries to delete a file on your system.
A kernel is all about access to resources, whether the resource in question happens to be a video card, a hard drive or even memory.
Programs often compete for the same resource. As I just saved this document, updatedb started updating the locate database.
My vim session and updatedb are both using the hard drive concurrently.
The kernel needs to keep things orderly, and not give users access to resources whenever they feel like it.
To this end, a CPU can run in different modes.
Each mode gives a different level of freedom to do what you want on the system.
The Intel 80386 architecture had 4 of these modes, which were called rings. Unix uses only two rings; the highest ring (ring 0, also known as ``supervisor mode'' where everything is allowed to happen) and the lowest ring, which is called ``user mode''.
Recall the discussion about library functions vs system calls.
Typically, you use a library function in user mode.
The library function calls one or more system calls, and these system calls execute on the library function's behalf, but do so in supervisor mode since they are part of the kernel itself.
When you write a small C program, you use variables which are convenient and make sense to the reader.
If, on the other hand, you are writing routines which will be part of a bigger problem, any global variables you have are part of a community of other peoples' global variables; some of the variable names can clash.
When a program has lots of global variables which aren't meaningful enough to be distinguished, you get namespace pollution.
In large projects, effort must be made to remember reserved names, and to find ways to develop a scheme for naming unique variable names and symbols.
By convention, all kernel prefixes are lowercase. If you do not want to declare everything as static, another option is to declare a symbol table and register it with the kernel.
The file \verb|/proc/kallsyms| holds all the symbols that the kernel knows about and which are therefore accessible to your modules since they share the kernel's codespace.
Memory management is a very complicated subject, and a large part of O'Reilly's \href{https://www.oreilly.com/library/view/understanding-the-linux/0596005652/}{Understanding The Linux Kernel} is dedicated to memory management!
We are not setting out to be experts on memory management, but we do need to know a couple of facts to even begin worrying about writing real modules.
If you have not thought about what a segfault really means, you may be surprised to hear that pointers do not actually point to memory locations.
Not real ones, anyway.
When a process is created, the kernel sets aside a portion of real physical memory and hands it to the process to use for its executing code, variables, stack, heap and other things which a computer scientist would know about.
This memory begins at 0x00000000 and extends up to whatever it needs to be.
Since the memory space for any two processes does not overlap, every process that can access a memory address, say 0xbffff978, would be accessing a different location in real physical memory! The processes would be accessing an index named 0xbffff978, which points to some kind of offset into the region of memory set aside for that particular process.
For the most part, a process like our Hello, World program can't access the space of another process, although there are ways which we will talk about later.
The kernel has its own space of memory as well. Since a module is code which can be dynamically inserted and removed in the kernel (as opposed to a semi-autonomous object), it shares the kernel's codespace rather than having its own.
Therefore, if your module segfaults, the kernel segfaults.
And if you start writing over data because of an off-by-one error, then you're trampling on kernel data (or code).
This is even worse than it sounds, so try your best to be careful.
By the way, I would like to point out that the above discussion is true for any operating system which uses a monolithic kernel.
This is not quite the same thing as \emph{"building all your modules into the kernel"}, although the idea is the same.
There are things called microkernels which have modules which get their own codespace.
The \href{https://www.gnu.org/software/hurd/}{GNU Hurd} and the \href{https://fuchsia.dev/fuchsia-src/concepts/kernel}{Zircon kernel} of Google Fuchsia are two examples of a microkernel.
On Unix, each piece of hardware is represented by a file located in \verb|/dev|, called a device file, which provides the means to communicate with the hardware.
The minor number is used by the driver to distinguish between the various hardware it controls.
Returning to the example above, although all three devices are handled by the same driver they have unique minor numbers because the driver sees them as being different pieces of hardware.
Devices are divided into two types: character devices and block devices.
The difference is that block devices have a buffer for requests, so they can choose the best order in which to respond to the requests.
This is important in the case of storage devices, where it is faster to read or write sectors which are close to each other, rather than those which are further apart.
Another difference is that block devices can only accept input and return output in blocks (whose size can vary according to the device), whereas character devices are allowed to use as many or as few bytes as they like.
Most devices in the world are character devices, because they don't need this type of buffering, and they don't operate with a fixed block size.
However, when creating a device file for testing purposes, it is probably OK to place it in your working directory where you compile the kernel module.
Just be sure to put it in the right place when you're done writing the device driver.
The \cpp|file_operations| structure is defined in \src{include/linux/fs.h}, and holds pointers to functions defined by the driver that perform various operations on the device.
However, there is also a C99 way of assigning to elements of a structure, \href{https://gcc.gnu.org/onlinedocs/gcc/Designated-Inits.html}{designated initializers}, and this is definitely preferred over using the GNU extension.
You should use this syntax in case someone wants to port your driver.
The meaning is clear, and you should be aware that any member of the structure which you do not explicitly assign will be initialized to \cpp|NULL| by gcc.
An instance of \cpp|struct file_operations| containing pointers to functions that are used to implement \cpp|read|, \cpp|write|, \cpp|open|, \ldots{} system calls is commonly named \cpp|fops|.
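For instance, a sketch of such an initialization using designated initializers might look as follows; \cpp|device_open|, \cpp|device_read|, \cpp|device_write| and \cpp|device_release| are hypothetical driver functions.
\begin{code}
#include <linux/fs.h>
#include <linux/module.h>

static int device_open(struct inode *, struct file *);
static int device_release(struct inode *, struct file *);
static ssize_t device_read(struct file *, char __user *, size_t, loff_t *);
static ssize_t device_write(struct file *, const char __user *, size_t, loff_t *);

static struct file_operations fops = {
    .owner = THIS_MODULE,
    .read = device_read,
    .write = device_write,
    .open = device_open,
    .release = device_release,
    /* every member not named here is implicitly initialized to NULL */
};
\end{code}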
Since Linux v3.14, the read, write and seek operations are guaranteed to be thread-safe by using the \cpp|f_pos| specific lock, which makes the file position update mutually exclusive.
So we can safely implement those operations without unnecessary locking.
Since Linux v5.6, the \cpp|proc_ops| structure was introduced to replace the use of the \cpp|file_operations| structure when registering proc handlers.
Also, its name is a bit misleading; it represents an abstract open `file', not a file on a disk, which is represented by a structure named \cpp|inode|.
The major number tells you which driver handles which device file.
The minor number is used only by the driver itself to differentiate which device it is operating on, just in case the driver handles more than one device.
Where \cpp|unsigned int major| is the major number you want to request, \cpp|const char *name| is the name of the device as it will appear in \verb|/proc/devices| and \cpp|struct file_operations *fops| is a pointer to the \cpp|file_operations| table for your driver.
Second, the newly registered device will have an entry in \verb|/proc/devices|, and we can either make the device file by hand or write a shell script to read the file in and make the device file.
The third method is that we can have our driver make the device file using the \cpp|device_create| function after a successful registration and \cpp|device_destroy| during the call to \cpp|cleanup_module|.
However, \cpp|register_chrdev()| would occupy a range of minor numbers associated with the given major.
The recommended way to reduce waste in char device registration is to use the cdev interface.
The newer interface completes the char device registration in two distinct steps.
First, we should register a range of device numbers, which can be completed with \cpp|register_chrdev_region| or \cpp|alloc_chrdev_region|.
\begin{code}
int register_chrdev_region(dev_t from, unsigned count, const char *name);
int alloc_chrdev_region(dev_t *dev, unsigned baseminor, unsigned count, const char *name);
\end{code}
The choice between the two functions depends on whether you know the major number for your device.
Use \cpp|register_chrdev_region| if you know the device major number, and \cpp|alloc_chrdev_region| if you would like a dynamically allocated major number.
Second, we should initialize the data structure \cpp|struct cdev| for our char device and associate it with the device numbers.
To initialize the \cpp|struct cdev|, we can use a sequence similar to the following code.
\begin{code}
struct cdev *my_dev = cdev_alloc();
my_dev->ops = &my_fops;
\end{code}
However, the common usage pattern will embed the \cpp|struct cdev| within a device-specific structure of your own.
In this case, we'll need \cpp|cdev_init| for the initialization.
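A minimal sketch of that pattern, with hypothetical names and no error-path cleanup, might look like this:
\begin{code}
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

struct my_device_data {
    struct cdev cdev;
    /* ...other driver-specific fields... */
};

static struct my_device_data my_data;
static const struct file_operations my_fops = {
    .owner = THIS_MODULE,
    /* ...handlers filled in elsewhere... */
};
static dev_t my_devt;

static int my_register(void)
{
    int ret = alloc_chrdev_region(&my_devt, 0, 1, "my_device");

    if (ret < 0)
        return ret;

    cdev_init(&my_data.cdev, &my_fops);
    my_data.cdev.owner = THIS_MODULE;
    return cdev_add(&my_data.cdev, my_devt, 1);
}

static void my_unregister(void)
{
    cdev_del(&my_data.cdev);
    unregister_chrdev_region(my_devt, 1);
}
\end{code}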
If the device file is opened by a process and then we remove the kernel module, using the file would cause a call to the memory location where the appropriate function (read/write) used to be.
If we are lucky, no other code was loaded there, and we'll get an ugly error message.
If we are unlucky, another kernel module was loaded into the same location, which means a jump into the middle of another function within the kernel.
The results of this would be impossible to predict, but they can not be very positive.
Note that you do not have to check the counter within \cpp|cleanup_module| because the check will be performed for you by the system call \cpp|sys_delete_module|, defined in \src{include/linux/syscalls.h}.
You should not use this counter directly, but there are functions defined in \src{include/linux/module.h} which let you increase, decrease and display this counter:
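\begin{itemize}
\item \cpp|try_module_get(THIS_MODULE)|: Increment the reference count of the current module.
\item \cpp|module_put(THIS_MODULE)|: Decrement the reference count of the current module.
\item \cpp|module_refcount(THIS_MODULE)|: Return the value of the reference count of the current module.
\end{itemize}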
It is important to keep the counter accurate; if you ever do lose track of the correct usage count, you will never be able to unload the module; it's now reboot time, boys and girls.
This is bound to happen to you sooner or later during a module's development.
We do not support writing to the file (like \sh|echo "hi" > /dev/hello|), but catch these attempts and tell the user that the operation is not supported.
In a multithreaded environment, concurrent access to the same memory without any protection may lead to race conditions and inconsistent results.
In a kernel module, this problem may happen when multiple instances access shared resources.
Therefore, a solution is to enforce exclusive access.
We use atomic Compare-And-Swap (CAS) to maintain the states, \cpp|CDEV_NOT_USED| and \cpp|CDEV_EXCLUSIVE_OPEN|, to determine whether the file is currently opened by someone or not.
CAS compares the contents of a memory location with the expected value and, only if they are the same, modifies the contents of that memory location to the desired value.
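A sketch of the idea, using handler names and state values of the sort described above (not necessarily the chapter's exact code), looks like this:
\begin{code}
#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/fs.h>

enum { CDEV_NOT_USED, CDEV_EXCLUSIVE_OPEN };

static atomic_t already_open = ATOMIC_INIT(CDEV_NOT_USED);

static int device_open(struct inode *inode, struct file *file)
{
    /* Atomically flip the state from "not used" to "exclusively open";
     * if somebody else got there first, the compare fails and we bail out. */
    if (atomic_cmpxchg(&already_open, CDEV_NOT_USED, CDEV_EXCLUSIVE_OPEN))
        return -EBUSY;
    return 0;
}

static int device_release(struct inode *inode, struct file *file)
{
    /* Mark the device as available again. */
    atomic_set(&already_open, CDEV_NOT_USED);
    return 0;
}
\end{code}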
The system calls, which are the major interface the kernel shows to the processes, generally stay the same across versions.
A new system call may be added, but usually the old ones will behave exactly like they used to.
This is necessary for backward compatibility -- a new kernel version is not supposed to break regular processes.
In most cases, the device files will also remain the same. On the other hand, the internal interfaces within the kernel can and do change between versions.
There are differences between different kernel versions, and if you want to support multiple kernel versions, you will find yourself having to code conditional compilation directives.
Originally designed to allow easy access to information about processes (hence the name), it is now used by every bit of the kernel which has something interesting to report, such as \verb|/proc/modules| which provides the list of modules and \verb|/proc/meminfo| which gathers memory usage statistics.
The method to use the proc file system is very similar to the one used with device drivers --- a structure is created with all the information needed for the \verb|/proc| file, including pointers to any handler functions (in our case there is only one, the one called when somebody attempts to read from the \verb|/proc| file).
Then, \cpp|init_module| registers the structure with the kernel and \cpp|cleanup_module| unregisters it.
Normal file systems are located on a disk, rather than just in memory (which is where \verb|/proc| is), and in that case the index-node (inode for short) number is a pointer to a disk location where the file's inode is located.
The inode contains information about the file, for example the file's permissions, together with a pointer to the disk location or locations where the file's data can be found.
Because we don't get called when the file is opened or closed, there's nowhere for us to put \cpp|try_module_get| and \cpp|module_put| in this module, and if the file is opened and then the module is removed, there's no way to avoid the consequences.
Here is a simple example showing how to use a \verb|/proc| file.
This is the HelloWorld for the \verb|/proc| filesystem.
There are three parts: create the file \verb|/proc/helloworld| in the function \cpp|init_module|, return a value (and a buffer) when the file \verb|/proc/helloworld| is read in the callback function \cpp|procfile_read|, and delete the file \verb|/proc/helloworld| in the function \cpp|cleanup_module|.
The \verb|/proc/helloworld| is created when the module is loaded with the function \cpp|proc_create|.
The return value is a pointer to a \cpp|struct proc_dir_entry|, which can be used to configure the file \verb|/proc/helloworld| (for example, the owner of this file).
In older kernels, \cpp|file_operations| was used for custom hooks in the \verb|/proc| file system, but it contains some members that are unnecessary in VFS, and every time VFS expanded the \cpp|file_operations| set, the \verb|/proc| code became bloated.
For example, a file that never disappears in \verb|/proc| can set the \cpp|proc_flag| as \cpp|PROC_ENTRY_PERMANENT| to save two atomic ops, one allocation, and one free in every open/read/close sequence.
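As a sketch (consistent with the helloworld example described above, though not necessarily identical to it), registering a read-only \verb|/proc| entry with \cpp|proc_ops| can look like this:
\begin{code}
#include <linux/fs.h>       /* for simple_read_from_buffer() */
#include <linux/proc_fs.h>  /* for proc_create() and struct proc_ops */

static ssize_t procfile_read(struct file *file, char __user *buffer,
                             size_t count, loff_t *offset)
{
    char msg[] = "HelloWorld!\n";

    return simple_read_from_buffer(buffer, count, offset, msg, sizeof(msg) - 1);
}

static const struct proc_ops proc_file_fops = {
    .proc_read = procfile_read,
};

/* In init_module():    proc_create("helloworld", 0644, NULL, &proc_file_fops);
 * In cleanup_module(): remove_proc_entry("helloworld", NULL);                 */
\end{code}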
But there is a little difference with read: for write, data comes from user space, so you have to import it from user space into kernel space (with \cpp|copy_from_user| or \cpp|get_user|).
The reason for \cpp|copy_from_user| or \cpp|get_user| is that Linux memory (on Intel architecture, it may be different under some other processors) is segmented.
This means that a pointer, by itself, does not reference a unique location in memory, only a location in a memory segment, and you need to know which memory segment it is to be able to use it.
There is one memory segment for the kernel, and one for each of the processes.
The only memory segment accessible to a process is its own, so when writing regular programs to run as processes, there is no need to worry about segments.
When you write a kernel module, normally you want to access the kernel memory segment, which is handled automatically by the system.
However, when the content of a memory buffer needs to be passed between the currently running process and the kernel, the kernel function receives a pointer to the memory buffer which is in the process segment.
As the buffer (in the read or write function) is in kernel space, the write function needs to import data because it comes from user space, but the read function does not, because its data is already in kernel space.
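A sketch of a write handler doing that import (the buffer size and names are illustrative):
\begin{code}
#include <linux/fs.h>
#include <linux/uaccess.h> /* for copy_from_user() */

#define PROCFS_MAX_SIZE 1024

static char procfs_buffer[PROCFS_MAX_SIZE];
static unsigned long procfs_buffer_size;

static ssize_t procfile_write(struct file *file, const char __user *buff,
                              size_t len, loff_t *off)
{
    unsigned long size = len;

    if (size > PROCFS_MAX_SIZE)
        size = PROCFS_MAX_SIZE;

    /* The user-space pointer cannot be dereferenced directly;
     * copy_from_user() moves the data into our kernel buffer. */
    if (copy_from_user(procfs_buffer, buff, size))
        return -EFAULT;

    procfs_buffer_size = size;
    return size;
}
\end{code}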
Since every file system has to have its own functions to handle inode and file operations, there is a special structure to hold pointers to all those functions, \cpp|struct inode_operations|, which includes a pointer to \cpp|struct proc_ops|.
The difference between file and inode operations is that file operations deal with the file itself whereas inode operations deal with ways of referencing the file, such as creating links to it.
This is the mechanism we use, a \cpp|struct inode_operations| which includes a pointer to a \cpp|struct proc_ops| which includes pointers to our \cpp|procfs_read| and \cpp|procfs_write| functions.
Right now it is only based on the operation and the uid of the current user (as available in current, a pointer to a structure which includes information on the currently running process), but it could be based on anything we like, such as what other processes are doing with the same file, the time of day, or the last input we received.
It is important to note that the standard roles of read and write are reversed in the kernel.
Read functions are used for output, whereas write functions are used for input.
The reason for that is that read and write refer to the user's point of view --- if a process reads something from the kernel, then the kernel needs to output it, and if a process writes something to the kernel, then the kernel receives it as input.
To read or write attributes, the \cpp|show()| or \cpp|store()| method must be specified when declaring the attribute.
For the common cases \src{include/linux/sysfs.h} provides convenience macros (\cpp|__ATTR|, \cpp|__ATTR_RO|, \cpp|__ATTR_WO|, etc.) to make defining attributes easier as well as making code more concise and readable.
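As an illustration, a sketch of a simple read-write attribute attached to a kobject (the variable and attribute names are hypothetical) could look like this:
\begin{code}
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/sysfs.h>

static int myvariable;

static ssize_t myvariable_show(struct kobject *kobj,
                               struct kobj_attribute *attr, char *buf)
{
    return sprintf(buf, "%d\n", myvariable);
}

static ssize_t myvariable_store(struct kobject *kobj,
                                struct kobj_attribute *attr,
                                const char *buf, size_t count)
{
    sscanf(buf, "%d", &myvariable);
    return count;
}

/* __ATTR(name, mode, show, store) from include/linux/sysfs.h */
static struct kobj_attribute myvariable_attr =
    __ATTR(myvariable, 0660, myvariable_show, myvariable_store);
\end{code}
The attribute would then be exposed with \cpp|sysfs_create_file(kobj, &myvariable_attr.attr)| once the parent kobject exists (for example, one created with \cpp|kobject_create_and_add|).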
Device files are supposed to represent physical devices.
Most physical devices are used for output as well as input, so there has to be some mechanism for device drivers in the kernel to get the output to send to the device from processes.
This is done by opening the device file for output and writing to it, just like writing to a file.
Imagine you had a serial port connected to a modem (even if you have an internal modem, it is still implemented from the CPU's perspective as a serial port connected to a modem, so you don't have to tax your imagination too hard).
The natural thing to do would be to use the device file to write things to the modem (either modem commands or data to be sent through the phone line) and read things from the modem (either responses for commands or the data received through the phone line).
However, this leaves open the question of what to do when you need to talk to the serial port itself, for example to configure the rate at which data is sent and received.
Every device can have its own \cpp|ioctl| commands, which can be read ioctl's (to send information from a process to the kernel), write ioctl's (to return information to a process), both or neither.
Notice here the roles of read and write are reversed again, so in ioctl's read is to send information to the kernel and write is to receive information from the kernel.
The ioctl function is called with three parameters: the file descriptor of the appropriate device file, the ioctl number, and a parameter, which is of type long so you can use a cast to use it to pass anything.
You will not be able to pass a structure this way, but you will be able to pass a pointer to the structure.
This header file should then be included both by the programs which will use ioctl (so they can generate the appropriate ioctl's) and by the kernel module (so it can understand it).
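A sketch of such a shared header, with a hypothetical magic number and request names, might look like this; the \cpp|_IOR|/\cpp|_IOW|/\cpp|_IOWR| macros from \verb|<linux/ioctl.h>| encode the direction, the magic number, the command number and the argument type into a single ioctl number.
\begin{code}
/* chardev.h -- included by both the kernel module and user-space programs */
#ifndef CHARDEV_H
#define CHARDEV_H

#include <linux/ioctl.h>

#define MAJOR_NUM 100

/* "Write" ioctl: user space passes a pointer with data for the kernel. */
#define IOCTL_SET_MSG _IOW(MAJOR_NUM, 0, char *)

/* "Read" ioctl: the kernel fills in the buffer the pointer refers to. */
#define IOCTL_GET_MSG _IOR(MAJOR_NUM, 1, char *)

/* Both directions: the argument is used for input and for output. */
#define IOCTL_GET_NTH_BYTE _IOWR(MAJOR_NUM, 2, int)

#endif
\end{code}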
If you want to use ioctls in your own kernel modules, it is best to receive an official ioctl assignment, so if you accidentally get somebody else's ioctls, or if they get yours, you'll know something is wrong.
The real process to kernel communication mechanism, the one used by all processes, is \emph{system calls}.
When a process requests a service from the kernel (such as opening a file, forking to a new process, or requesting more memory), this is the mechanism used.
If you want to change the behaviour of the kernel in interesting ways, this is the place to do it.
In general, a process is not supposed to be able to access the kernel.
It can not access kernel memory and it can't call kernel functions.
The hardware of the CPU enforces this (that is the reason why it is called ``protected mode'' or ``page protection'').
System calls are an exception to this general rule.
What happens is that the process fills the registers with the appropriate values and then calls a special instruction which jumps to a previously defined location in the kernel (of course, that location is readable by user processes, it is not writable by them).
Under Intel CPUs, this is done by means of interrupt 0x80. The hardware knows that once you jump to this location, you are no longer running in restricted user mode, but as the operating system kernel --- and therefore you're allowed to do whatever you want.
% FIXME: recent kernel changes the system call entries
The location in the kernel a process can jump to is called \verb|system_call|.
The procedure at that location checks the system call number, which tells the kernel what service the process requested.
Then it calls the function, and after it returns, does a few system checks and then returns to the process (or to a different process, if the process's time ran out).
So, if we want to change the way a certain system call works, what we need to do is to write our own function to implement it (usually by adding a bit of our own code, and then calling the original function) and then change the pointer at \cpp|sys_call_table| to point to our function.
Because we might be removed later and we don't want to leave the system in an unstable state, it's important for \cpp|cleanup_module| to restore the table to its original state.
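Conceptually (ignoring for a moment the write protection and symbol lookup issues discussed below), the replacement and restoration look roughly like the following sketch for x86-64 kernels, where system call handlers take a \cpp|struct pt_regs| pointer; \cpp|our_sys_openat| and the table pointer are placeholders.
\begin{code}
#include <linux/linkage.h>  /* for asmlinkage */
#include <linux/ptrace.h>   /* for struct pt_regs */
#include <linux/unistd.h>   /* for __NR_openat */

static unsigned long **sys_call_table_ptr; /* located as described below */
static asmlinkage long (*original_openat)(const struct pt_regs *);

static asmlinkage long our_sys_openat(const struct pt_regs *regs)
{
    /* ...inspect or log the call here... */
    return original_openat(regs); /* then hand over to the real handler */
}

static void hook_openat(void)
{
    /* write protection must be lifted around these stores, see below */
    original_openat = (void *)sys_call_table_ptr[__NR_openat];
    sys_call_table_ptr[__NR_openat] = (unsigned long *)our_sys_openat;
}

static void restore_openat(void)
{
    sys_call_table_ptr[__NR_openat] = (unsigned long *)original_openat;
}
\end{code}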
To modify the content of \cpp|sys_call_table|, we need to consider the control register.
A control register is a processor register that changes or controls the general behavior of the CPU.
For x86 architecture, the \verb|cr0| register has various control flags that modify the basic operation of the processor.
The \verb|WP| flag in \verb|cr0| stands for write protection.
Once the \verb|WP| flag is set, the processor disallows further write attempts to the read-only sections.
Therefore, we must disable the \verb|WP| flag before modifying \cpp|sys_call_table|.
Since Linux v5.3, the \cpp|write_cr0| function cannot be used for this because sensitive \verb|cr0| bits are pinned for security reasons: an attacker could otherwise write into CPU control registers to disable CPU protections such as write protection.
As a result, we have to provide a custom assembly routine to bypass it.
However, the \cpp|sys_call_table| symbol is unexported to prevent misuse.
But there are a few ways to get the symbol: manual symbol lookup and \cpp|kallsyms_lookup_name|.
Here we use both, depending on the kernel version.
\textit{Control-flow integrity} is a technique to prevent an attacker from redirecting execution, by making sure that indirect calls go to the expected addresses and that return addresses are not changed.
Since Linux v5.7, the kernel has carried the series of \textit{control-flow enforcement} (CET) patches for x86, and some configurations of GCC, like GCC versions 9 and 10 in Ubuntu, enable CET (the \verb|-fcf-protection| option) in the kernel by default.
Using such a GCC to compile the kernel with retpoline off may result in CET being enabled in the kernel.
You can use the following command to check whether the \verb|-fcf-protection| option is enabled:
To guarantee that manual symbol lookup works, we only use it on kernels up to v5.4.
Unfortunately, since Linux v5.7 \cpp|kallsyms_lookup_name| is also unexported, and it takes a certain trick to get its address.
If \cpp|CONFIG_KPROBES| is enabled, we can facilitate the retrieval of function addresses by means of Kprobes to dynamically break into the specific kernel routine.
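With that option available, a commonly used sketch for recovering the address of \cpp|kallsyms_lookup_name| itself looks like this:
\begin{code}
#include <linux/errno.h>
#include <linux/kprobes.h>

static unsigned long (*kallsyms_lookup_name_fn)(const char *name);

static struct kprobe kp = {
    .symbol_name = "kallsyms_lookup_name",
};

static int resolve_kallsyms_lookup_name(void)
{
    /* Registering the probe makes the kernel resolve the symbol for us. */
    if (register_kprobe(&kp) < 0)
        return -ENOENT;
    kallsyms_lookup_name_fn = (void *)kp.addr;
    unregister_kprobe(&kp);
    return 0;
}
\end{code}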
Kprobes inserts a breakpoint at the entry of a function by replacing the first bytes of the probed instruction.
When a CPU hits the breakpoint, registers are stored, and the control will pass to Kprobes.
It passes the addresses of the saved registers and the Kprobe struct to the handler you defined, then executes it.
Kprobes can be registered by symbol name or address.
When registering by symbol name, the address will be resolved by the kernel.
Otherwise, specify the address of \cpp|sys_call_table| from \verb|/proc/kallsyms| or \verb|/boot/System.map| in the \cpp|sym| parameter.
Following is the sample usage for \verb|/proc/kallsyms|:
\begin{verbatim}
$ sudo grep sys_call_table /proc/kallsyms
ffffffff82000280 R x32_sys_call_table
ffffffff820013a0 R sys_call_table
ffffffff820023e0 R ia32_sys_call_table
$ sudo insmod syscall.ko sym=0xffffffff820013a0
\end{verbatim}
When using the address from \verb|/boot/System.map|, be careful about \verb|KASLR| (Kernel Address Space Layout Randomization).
\verb|KASLR| may randomize the addresses of kernel code and data at every boot, so that the static address listed in \verb|/boot/System.map| is offset by some entropy.
The purpose of \verb|KASLR| is to protect the kernel space from the attacker.
Without \verb|KASLR|, the attacker may find the target address in the fixed address easily.
Then the attacker can use return-oriented programming to insert malicious code to execute, or to read the target data through a tampered pointer.
\verb|KASLR| mitigates these kinds of attacks because the attacker cannot immediately know the target address, but a brute-force attack can still work.
If the address of a symbol in \verb|/proc/kallsyms| differs from the address in \verb|/boot/System.map|, \verb|KASLR| is enabled in the kernel your system is running on.
We want to ``spy'' on a certain user, and to \cpp|pr_info()| a message whenever that user opens a file.
Towards this end, we replace the system call to open a file with our own function, called \cpp|our_sys_open|.
This function checks the uid (user's id) of the current process, and if it is equal to the uid we spy on, it calls \cpp|pr_info()| to display the name of the file to be opened.
Imagine we have two kernel modules, A and B. A's open system call will be \cpp|A_open| and B's will be \cpp|B_open|.
Now, when A is inserted into the kernel, the system call is replaced with \cpp|A_open|, which will call the original \cpp|sys_open| when it is done.
Next, B is inserted into the kernel, which replaces the system call with \cpp|B_open|, which will call what it thinks is the original system call, \cpp|A_open|, when it's done.
At first glance, it appears we could solve this particular problem by checking if the system call is equal to our open function and if so not changing it at all (so that B won't change the system call when it is removed), but that will cause an even worse problem.
When A is removed, it sees that the system call was changed to \cpp|B_open| so that it is no longer pointing to \cpp|A_open|, so it will not restore it to \cpp|sys_open| before it is removed from memory.
Unfortunately, \cpp|B_open| will still try to call \cpp|A_open| which is no longer there, so that even without removing B the system would crash.
In order to keep people from doing potentially harmful things, \cpp|sys_call_table| is no longer exported.
This means, if you want to do something more than a mere dry run of this example, you will have to patch your current kernel in order to have \cpp|sys_call_table| exported.
What do you do when somebody asks you for something you can not do right away?
If you are a human being and you are bothered by a human being, the only thing you can say is: "\emph{Not right now, I'm busy. Go away!}".
But if you are a kernel module and you are bothered by a process, you have another possibility.
You can put the process to sleep until you can service it.
After all, processes are being put to sleep by the kernel and woken up all the time (that is the way multiple processes appear to run at the same time on a single CPU).
This function changes the status of the task (a task is the kernel data structure which holds information about a process and the system call it is in,
if any) to \cpp|TASK_INTERRUPTIBLE|, which means that the task will not run until it is woken up somehow, and adds it to WaitQ, the queue of tasks waiting to access the file.
This means that the process is still in kernel mode - as far as the process is concerned, it issued the open system call and the system call has not returned yet.
The process does not know somebody else used the CPU for most of the time between the moment it issued the call and the moment it returned.
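A hedged sketch of this pattern using the generic wait queue helpers (not necessarily the chapter's exact code; real code would also need locking around the condition) looks like this:
\begin{code}
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(waitq);
static int resource_free = 1;

static int grab_resource(void)
{
    /* Sleep in state TASK_INTERRUPTIBLE until the resource is free.
     * A signal interrupts the sleep and makes the wait return non-zero. */
    if (wait_event_interruptible(waitq, resource_free))
        return -EINTR;
    resource_free = 0;
    return 0;
}

static void release_resource(void)
{
    resource_free = 1;
    wake_up_interruptible(&waitq); /* let one of the sleepers try again */
}
\end{code}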
So we will use \sh|tail -f| to keep the file open in the background, while trying to access it with another process (again in the background, so that we need not switch to a different vt).
In that case, we want to return with \cpp|-EINTR| immediately. This is important so users can, for example, kill the process before it receives the file.
There is one more point to remember. Sometimes processes don't want to sleep; they want either to get what they want immediately, or to be told it cannot be done.
The kernel is supposed to respond by returning with the error code \cpp|-EAGAIN| from operations which would otherwise block, such as opening the file in this example. The program \sh|cat_nonblock|, available in the \verb|examples/other| directory, can be used to open a file with \cpp|O_NONBLOCK|.
At the exit point of each thread the respective completion state is updated, and \cpp|wait_for_completion| is used by the flywheel thread to ensure that it does not begin prematurely.
So even though \cpp|flywheel_thread| is started first you should notice if you load this module and run \sh|dmesg| that turning the crank always happens first because the flywheel thread waits for it to complete.
There are other variations upon the \cpp|wait_for_completion| function, which include timeouts or being interrupted, but this basic mechanism is enough for many common situations without adding a lot of complexity.
If processes running on different CPUs or in different threads try to access the same memory, then it is possible that strange things can happen or your system can lock up.
To avoid this, various types of mutual exclusion kernel functions are available.
These indicate if a section of code is "locked" or "unlocked" so that simultaneous attempts to run it can not happen.
Because of this you should only use the spinlock mechanism around code which is likely to take no more than a few milliseconds to run and so will not noticeably slow anything down from the user's point of view.
The example here is \verb|"irq safe"| in that if interrupts happen during the lock then they will not be forgotten and will activate when the unlock happens, using the \cpp|flags| variable to retain their state.
Like the earlier spinlocks example, the one below shows an "irq safe" situation in which if other functions were triggered from irqs which might also read and write to whatever you are concerned with then they would not disrupt the logic.
As before it is a good idea to keep anything done within the lock as short as possible so that it does not hang up the system and cause users to start revolting against the tyranny of your module.
Of course, if you know for sure that there are no functions triggered by irqs which could possibly interfere with your logic then you can use the simpler \cpp|read_lock(&myrwlock)| and \cpp|read_unlock(&myrwlock)| or the corresponding write functions.
If you are doing simple arithmetic: adding, subtracting or bitwise operations, then there is another way in the multi-CPU and multi-hyperthreaded world to stop other parts of the system from messing with your mojo.
By using atomic operations you can be confident that your addition, subtraction or bit flip did actually happen and was not overwritten by some other shenanigans.
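For instance, a sketch using the kernel's \cpp|atomic_t| helpers (the counter is illustrative):
\begin{code}
#include <linux/atomic.h>
#include <linux/printk.h>

static atomic_t counter = ATOMIC_INIT(0);

static void bump_counter(void)
{
    /* Each of these operations is indivisible: no other CPU can interleave
     * with a half-done update or silently overwrite it. */
    atomic_inc(&counter);     /* counter++ */
    atomic_add(5, &counter);  /* counter += 5 */
    atomic_dec(&counter);     /* counter-- */
    pr_info("counter is now %d\n", atomic_read(&counter));
}
\end{code}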
Before the C11 standard adopted built-in atomic types, the kernel already provided a small set of atomic types, implemented with a bunch of tricky architecture-specific code.
Implementing the atomic types with C11 atomics might allow the kernel to throw away the architecture-specific code and make the kernel code friendlier to people who understand the standard.
But there are some problems, such as the memory model of the kernel not matching the model formed by the C11 atomics.
For further details, see:
\begin{itemize}
\item\href{https://www.kernel.org/doc/Documentation/atomic_t.txt}{kernel documentation of atomic types}
\item\href{https://lwn.net/Articles/691128/}{Time to move to C11 atomics?}
\item\href{https://lwn.net/Articles/698315/}{Atomic usage patterns in the kernel}
"tty" is an abbreviation of \emph{teletype}: originally a combination keyboard-printer used to communicate with a Unix system, and today an abstraction for the text stream used for a Unix program, whether it is a physical terminal, an xterm on an X display, a network connection used with ssh, etc.
In certain conditions, you may desire a simpler and more direct way to communicate to the external world.
Flashing keyboard LEDs can be such a solution: It is an immediate way to attract attention or to display a status condition.
Keyboard LEDs are present on almost every machine, they are always visible, they do not need any setup, and their use is rather simple and non-intrusive, compared to writing to a tty or a file.
From v4.14 to v4.15, the timer API underwent a series of changes to improve memory safety.
A buffer overflow in the area of a \cpp|timer_list| structure may be able to overwrite the \cpp|function| and \cpp|data| fields, providing the attacker with a way to use return-object programming (ROP) to call arbitrary functions within the kernel.
Also, the old callback prototype, which takes an \cpp|unsigned long| argument, prevents any type checking from working.
Furthermore, a function prototype with an \cpp|unsigned long| argument may be an obstacle to the forward-edge protection of \textit{control-flow integrity}.
Thus, it is better to use a unique prototype to separate it from the cluster of callbacks that take an \cpp|unsigned long| argument.
The timer callback should be passed a pointer to the \cpp|timer_list| structure rather than an \cpp|unsigned long| argument.
Then, all the information the callback needs, including the \cpp|timer_list| structure, is wrapped into a larger structure, and the callback can use the \cpp|container_of| macro instead of the \cpp|unsigned long| value.
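A sketch of the resulting pattern (the structure and field names are illustrative; \cpp|from_timer| is a thin wrapper around \cpp|container_of|):
\begin{code}
#include <linux/jiffies.h>
#include <linux/printk.h>
#include <linux/timer.h>

struct my_data {
    struct timer_list timer;
    int value; /* whatever the callback needs */
};

static struct my_data data;

/* The callback receives the timer_list pointer; from_timer() recovers
 * the enclosing structure. */
static void my_timer_callback(struct timer_list *t)
{
    struct my_data *d = from_timer(d, t, timer);

    pr_info("timer fired, value = %d\n", d->value);
}

static void start_my_timer(void)
{
    data.value = 42;
    timer_setup(&data.timer, my_timer_callback, 0);
    mod_timer(&data.timer, jiffies + msecs_to_jiffies(1000));
}
\end{code}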
While this might not sound very powerful by itself, you can patch \src{kernel/printk.c} or any other essential syscall to print ASCII characters, thus making it possible to trace virtually everything your code does over a serial line.
If you find yourself porting the kernel to some new and formerly unsupported architecture, this is usually amongst the first things that should be implemented.
Logging over a netconsole might also be worth a try.
Tasklets are a quick and easy way of scheduling a single function to be run, for example when triggered from an interrupt, whereas work queues are more complicated but also better suited to running multiple things in a sequence.
The \cpp|tasklet_fn| function runs for a few seconds and in the mean time execution of the \cpp|example_tasklet_init| function continues to the exit point.
Although tasklets are easy to use, they come with several drawbacks, and developers are discussing getting rid of tasklets in the Linux kernel.
The tasklet callback runs in atomic context, inside a software interrupt, meaning that it cannot sleep or access user-space data, so not all work can be done in a tasklet handler.
Also, the kernel only allows one instance of any given tasklet to be running at any given time; multiple different tasklet callbacks can run in parallel.
In recent kernels, tasklets can be replaced by workqueues, timers, or threaded interrupts.\footnote{The goal of threaded interrupts is to push more of the work to separate threads, so that the minimum needed for acknowledging an interrupt is reduced, and therefore the time spent handling the interrupt (where it can't handle any other interrupts at the same time) is reduced.
Except for the last chapter, everything we did in the kernel so far we have done as a response to a process asking for it, either by dealing with a special file, sending an \cpp|ioctl()|, or issuing a system call.
The second, called interrupts, is much harder to implement because it has to be dealt with when convenient for the hardware, not the CPU.
Hardware devices typically have a very small amount of RAM, and if you do not read their information when available, it is lost.
Under Linux, hardware interrupts are called IRQ's (Interrupt ReQuests).
There are two types of IRQ's, short and long.
A short IRQ is one which is expected to take a very short period of time, during which the rest of the machine will be blocked and no other interrupts will be handled.
A long IRQ is one which can take longer, and during which other interrupts may occur (but not interrupts from the same device).
If at all possible, it is better to declare an interrupt handler to be long.
When the CPU receives an interrupt, it stops whatever it is doing (unless it is processing a more important interrupt, in which case it will deal with this one only when the more important one is done),
saves certain parameters on the stack and calls the interrupt handler.
This means that certain things are not allowed in the interrupt handler itself, because the system is in an unknown state.
In practice IRQ handling can be a bit more complex.
Hardware is often designed in a way that chains two interrupt controllers, so that all the IRQs from interrupt controller B are cascaded to a certain IRQ from interrupt controller A.
Of course, that requires that the kernel finds out which IRQ it really was afterwards and that adds overhead. Other architectures offer some special, very low overhead, so called "fast IRQ" or FIQs.
To take advantage of them requires handlers to be written in assembler, so they do not really fit into the kernel.
They can be made to work similar to the others, but after that procedure, they are no longer any faster than "common" IRQs.
SMP enabled kernels running on systems with more than one processor need to solve another truckload of problems.
This function receives the IRQ number, the interrupt handler function, flags, a name for \verb|/proc/interrupts| and a parameter to be passed to the interrupt handler.
The flags can include \cpp|IRQF_SHARED| to indicate you are willing to share the IRQ with other interrupt handlers (usually because a number of hardware devices sit on the same IRQ); the old \cpp|SA_SHIRQ| and \cpp|SA_INTERRUPT| flags used for sharing and for fast interrupts in earlier kernels are no longer available.
Instead of having the CPU waste time and battery power polling for a change in input state, it is better for the input to trigger the CPU to run a particular handling function.
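A sketch of registering such a handler (the IRQ number and names are hypothetical):
\begin{code}
#include <linux/interrupt.h>

#define MY_IRQ 17 /* hypothetical IRQ line */

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    /* Acknowledge and handle the device here, as quickly as possible. */
    return IRQ_HANDLED;
}

static int install_handler(void *dev_id)
{
    /* The string shows up in /proc/interrupts; dev_id is handed back to the
     * handler and is required when the line is shared. */
    return request_irq(MY_IRQ, my_irq_handler, IRQF_SHARED,
                       "my_device", dev_id);
}

static void remove_handler(void *dev_id)
{
    free_irq(MY_IRQ, dev_id);
}
\end{code}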
At the dawn of the internet, everybody trusted everybody completely\ldots{}but that did not work out so well.
When this guide was originally written, it was a more innocent era in which almost nobody actually gave a damn about crypto - least of all kernel developers.
The \cpp|class_attribute| structure is similar to other attribute types we talked about in section \ref{sec:sysfs}:
\begin{code}
struct class_attribute {
    struct attribute attr;
    ssize_t (*show)(struct class *class, struct class_attribute *attr,
                    char *buf);
    ssize_t (*store)(struct class *class, struct class_attribute *attr,
                     const char *buf, size_t count);
};
\end{code}
In \verb|vinput.c|, the macro \cpp|CLASS_ATTR_WO(export/unexport)| defined in \src{include/linux/device.h} (in this case, \verb|device.h| is included in \src{include/linux/input.h}) will generate the \cpp|class_attribute| structures which are named \verb|class_attr_export/unexport|.
Then, put them into \cpp|vinput_class_attrs| array and the macro \cpp|ATTRIBUTE_GROUPS(vinput_class)| will generate the \cpp|struct attribute_group vinput_class_group| that should be assigned in \cpp|vinput_class|.
Finally, call \cpp|class_register(&vinput_class)| to create attributes in sysfs.
\section{Standardizing the interfaces: The Device Model}
\label{sec:device_model}
Up to this point we have seen all kinds of modules doing all kinds of things, but there was no consistency in their interfaces with the rest of the kernel.
To impose some consistency, such that there is at minimum a standardized way to start, suspend and resume a device, a device model was added.
Sometimes you might want your code to run as quickly as possible, especially if it is handling an interrupt or doing something which might cause noticeable latency.
When the \cpp|unlikely| macro is used, the compiler alters its machine instruction output, so that it continues along the false branch and only jumps if the condition is true.
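For example, a sketch of the typical use on a rarely failing allocation (the helper is illustrative):
\begin{code}
#include <linux/compiler.h> /* for likely() and unlikely() */
#include <linux/slab.h>     /* for kmalloc() */
#include <linux/string.h>   /* for memset() */

static void *setup_buffer(size_t size)
{
    void *buf = kmalloc(size, GFP_KERNEL);

    /* Allocation failure is rare, so tell the compiler to lay the code out
     * so that the common (success) path falls straight through. */
    if (unlikely(!buf))
        return NULL;

    memset(buf, 0, size);
    return buf;
}
\end{code}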
You might need to do this for a short time and that is OK, but if you do not enable them afterwards, your system will be stuck and you will have to power it off.
For people seriously interested in kernel programming, I recommend \href{https://kernelnewbies.org}{kernelnewbies.org} and the \src{Documentation} subdirectory within the kernel source code which is not always easy to understand but can be a starting point for further investigation.