Book: Writing Windows WDM Device Drivers

Using Memory

A device driver has to be very careful when allocating or accessing memory. However, accessing memory in a WDM driver is not as gruesome as in some other types of driver. For example, you do not need to worry about the murky x86 world of segments and selectors, as the kernel handles all this for you.

Pool Memory

Windows implements virtual memory, where the system pretends that it has more memory than it really has. This allows more applications (and the kernel) to keep running than would otherwise be the case.

Virtual memory is implemented by breaking each application's potential address space into fixed-size chunks called pages. (x86 processors have a 4KB page size, while Alpha processors use 8KB.) A page can either be resident in physical memory, or not present and swapped out to hard disk.

Drivers can allocate memory that can be paged out, called paged memory. Alternatively, they can allocate memory that is permanently resident, called nonpaged memory. If you try to access paged memory that is not resident at DISPATCH_LEVEL or above, you will cause a page fault and the kernel will crash. If you access nonresident paged memory at PASSIVE_LEVEL, the kernel simply blocks your thread until the Memory Manager has loaded the page back into memory.

Please do not make extravagant use of nonpaged memory. However, you will find that most of the memory your driver uses will be in nonpaged memory. You must use nonpaged memory if it is going to be accessed at DISPATCH_LEVEL or above. Your driver's StartIo routine and interrupt service routines (ISRs), for example, can access only nonpaged memory.

Table 3.4 shows how to allocate both paged and nonpaged memory using the kernel ExAllocatePool function. The table uses the DDK convention of listing IN or OUT before each parameter: IN parameters pass information into the function, while OUT parameters return information to the caller. In the DDK header files, IN and OUT are defined as empty macros, so they serve purely as documentation.

Specify the ExAllocatePool PoolType parameter as PagedPool if you want to allocate paged memory, or NonPagedPool if you want nonpaged memory. The other PoolType values are rarely used. Do not forget to check whether ExAllocatePool returned NULL, which indicates that the allocation failed. The ExAllocatePool return type is PVOID, so you will usually need to cast the return value to the correct type.

When you have finished using the memory, you must release it with ExFreePool, passing the pointer you obtained from ExAllocatePool. Use ExFreePool for both paged and nonpaged pool memory. If you forget to free memory, it is lost forever, as the kernel does not pick up the pieces after your driver has unloaded.

The ExAllocatePoolWithTag function associates a four-character tag with the allocated memory. This makes it easier to identify your driver's allocations in a debugger or in a crash dump.

Table 3.4 ExAllocatePool function

PVOID ExAllocatePool       IRQL <= DISPATCH_LEVEL
                           (at DISPATCH_LEVEL, use one of the NonPagedXxx pool types)

Parameter                  Description
IN POOL_TYPE PoolType      One of PagedPool, PagedPoolCacheAligned, NonPagedPool,
                           NonPagedPoolMustSucceed, NonPagedPoolCacheAligned, or
                           NonPagedPoolCacheAlignedMustS
IN ULONG NumberOfBytes     Number of bytes to allocate
Returns                    Pointer to the allocated memory, or NULL if the allocation failed
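
As a rough sketch of this allocate, check, and free pattern (the MY_DEVICE_DATA structure, the routine names, and the 'DdmW' tag are invented for this example and are not part of the DDK):

#include <wdm.h>

// Invented per-device context structure, used only for illustration.
typedef struct _MY_DEVICE_DATA {
    ULONG TransferCount;
    UCHAR Buffer[64];
} MY_DEVICE_DATA, *PMY_DEVICE_DATA;

NTSTATUS AllocateDeviceData(OUT PMY_DEVICE_DATA* DeviceData)
{
    PMY_DEVICE_DATA data;

    // Nonpaged memory, so the structure can also be touched at DISPATCH_LEVEL.
    // The four-character tag 'DdmW' identifies the allocation in pool dumps.
    data = (PMY_DEVICE_DATA)ExAllocatePoolWithTag(NonPagedPool,
                                                  sizeof(MY_DEVICE_DATA),
                                                  'DdmW');
    if (data == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;   // Always check for failure

    RtlZeroMemory(data, sizeof(MY_DEVICE_DATA));
    *DeviceData = data;
    return STATUS_SUCCESS;
}

VOID FreeDeviceData(IN PMY_DEVICE_DATA DeviceData)
{
    // Every successful allocation must eventually be passed to ExFreePool.
    ExFreePool(DeviceData);
}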

Lookaside Lists

If your driver keeps allocating and deallocating small amounts of pool memory, it will be inefficient and the pool will become fragmented. The kernel helps in this case by providing lookaside lists[6] for fixed-size chunks of memory. A lookaside list still gets its memory from the pool. However, when you free a chunk, it is not necessarily returned to the pool. Instead, some chunks are kept in the lookaside list, ready to satisfy the next allocation request. The number of chunks kept is determined by the kernel memory manager.

Lookaside lists can contain either paged or nonpaged memory. Use ExInitializeNPagedLookasideList to initialize a lookaside list for nonpaged memory and ExDeleteNPagedLookasideList to delete it. When you want to allocate a chunk of memory, call ExAllocateFromNPagedLookasideList. To free a chunk, call ExFreeToNPagedLookasideList. A similar set of functions is used for paged memory. Consult the DDK for full details of these functions.
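
The following sketch shows how these routines fit together for nonpaged memory; the MY_PACKET structure, the tag, and the routine names are invented for the example, and a real driver would normally keep the lookaside list in its device extension rather than in a global.

#include <wdm.h>

#define MY_PACKET_TAG 'kPmW'            // Invented tag for this example

typedef struct _MY_PACKET {             // Invented fixed-size structure
    LIST_ENTRY Link;
    UCHAR Data[128];
} MY_PACKET, *PMY_PACKET;

NPAGED_LOOKASIDE_LIST PacketLookaside;

VOID InitPacketList(VOID)
{
    // NULL allocate and free routines make the kernel fall back on
    // ExAllocatePoolWithTag and ExFreePool when the list is empty.
    ExInitializeNPagedLookasideList(&PacketLookaside,
                                    NULL,                // Allocate routine
                                    NULL,                // Free routine
                                    0,                   // Flags
                                    sizeof(MY_PACKET),   // Size of each chunk
                                    MY_PACKET_TAG,
                                    0);                  // Depth: tuned by the kernel
}

VOID UseOnePacket(VOID)
{
    PMY_PACKET packet = (PMY_PACKET)
        ExAllocateFromNPagedLookasideList(&PacketLookaside);
    if (packet == NULL)
        return;

    // ... use the packet ...

    // The chunk may be cached in the list rather than returned to the pool.
    ExFreeToNPagedLookasideList(&PacketLookaside, packet);
}

VOID DeletePacketList(VOID)
{
    // Returns any cached chunks to the pool.
    ExDeleteNPagedLookasideList(&PacketLookaside);
}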

Other Memory Considerations

The kernel stack, which holds your local variables[7], is nonpaged memory. However, there is no room for huge data structures on the kernel stack, because it is only 8KB-12KB long.

Drivers need to be reentrant so that they can be called simultaneously on different processors. The use of global variables is, therefore, strongly discouraged. However, you might read some registry settings into globals, as these are effectively constant for all the code. For the same reason, local static variables should not normally be used either.

Finally, you can reduce your driver's memory usage in other ways. Once the driver initialization routines have completed, they will not be needed again, so they can be placed in a discardable code segment. Similarly, some routines may be put into a pageable code segment. However, routines that run at DISPATCH_LEVEL or above must be in nonpageable, nondiscardable memory.
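
One common way of arranging this, sketched below with hypothetical routine names, is the alloc_text pragma: code marked INIT is discarded once DriverEntry has returned, and code marked PAGE can be paged out when not in use.

#include <wdm.h>

NTSTATUS DriverEntry(IN PDRIVER_OBJECT DriverObject, IN PUNICODE_STRING RegistryPath);
NTSTATUS MyDispatchCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp);

#ifdef ALLOC_PRAGMA
#pragma alloc_text(INIT, DriverEntry)       // Discarded after initialization
#pragma alloc_text(PAGE, MyDispatchCreate)  // Pageable: never run at DISPATCH_LEVEL or above
#endif

NTSTATUS DriverEntry(IN PDRIVER_OBJECT DriverObject, IN PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->MajorFunction[IRP_MJ_CREATE] = MyDispatchCreate;
    return STATUS_SUCCESS;
}

NTSTATUS MyDispatchCreate(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);

    PAGED_CODE();   // Checked-build assertion that the IRQL is below DISPATCH_LEVEL

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}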

Accessing User Application Memory

There are two main techniques for accessing user data buffers. If you use Buffered I/O, the I/O Manager lets you use a nonpaged buffer visible in system memory for I/O operations. The I/O Manager copies write data into this buffer before your driver is run, and copies back read data into user space when the request has completed.

The alternative technique, Direct I/O, is preferable, as it involves less copying of data. However, it is slightly harder to use and is usually only used by DMA drivers that transfer large amounts of data. The I/O Manager passes a Memory Descriptor List (MDL) that describes the user space buffer. While a driver can make the user buffer visible in the system address space, the MDL is usually passed to the DMA handling kernel routines.
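
The following sketch of a Write dispatch routine shows where the buffer comes from in each case; the routine name is invented, and a real driver would do more validation before using the buffer.

#include <wdm.h>

NTSTATUS MyWrite(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
    PIO_STACK_LOCATION IrpStack = IoGetCurrentIrpStackLocation(Irp);
    ULONG Length = IrpStack->Parameters.Write.Length;
    PVOID Buffer = NULL;

    if (DeviceObject->Flags & DO_BUFFERED_IO) {
        // Buffered I/O: the I/O Manager has already copied the user's data
        // into a nonpaged system buffer.
        Buffer = Irp->AssociatedIrp.SystemBuffer;
    } else if (DeviceObject->Flags & DO_DIRECT_IO) {
        // Direct I/O: map the MDL describing the locked user pages into the
        // system address space (or simply hand the MDL to the DMA routines).
        Buffer = MmGetSystemAddressForMdlSafe(Irp->MdlAddress, NormalPagePriority);
    }

    if (Buffer == NULL && Length > 0) {
        Irp->IoStatus.Status = STATUS_INSUFFICIENT_RESOURCES;
        Irp->IoStatus.Information = 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    // ... transfer Length bytes from Buffer to the device ...

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = Length;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}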

DMA

Direct Memory Access (DMA) hardware controllers perform I/O data transfers directly between a device and main memory without going through the processor. The first implication is that DMA memory cannot be paged out during a transfer. Secondly, the DMA controller has to be programmed with a physical memory address, not a processor virtual address. The kernel provides routines to help with both these tasks.
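
As a very rough illustration of the second point, the sketch below allocates a nonpaged buffer and asks the kernel for its physical address; the routine name and tag are invented, and a real DMA driver would use the kernel's adapter object routines rather than programming raw physical addresses itself.

#include <wdm.h>

VOID ShowPhysicalAddress(VOID)
{
    PVOID Buffer;
    PHYSICAL_ADDRESS Physical;

    // The buffer must be nonpaged so that it stays resident during the transfer.
    Buffer = ExAllocatePoolWithTag(NonPagedPool, 4096, 'amDW');
    if (Buffer == NULL)
        return;

    // The DMA controller needs this physical address, not the virtual one.
    Physical = MmGetPhysicalAddress(Buffer);
    KdPrint(("Virtual %p maps to physical %I64x\n", Buffer, Physical.QuadPart));

    ExFreePool(Buffer);
}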
