Informative Information for the Uninformed
Self-updating Relocation Blocks

One of the more interesting nuisances about the way relocation fixups are processed is that it is actually possible to create a relocation block that performs fixups against other relocation blocks. The effect is that the relocation information that appears on disk differs from what is actually processed when relocation fixups are applied. The basic idea behind this approach is to prepend relocation blocks that apply fixups to the relocation blocks that follow them. This works because relocation blocks are typically processed in the order in which they appear. An example of this basic concept is shown below:
static VOID PrependSelfUpdatingRelocations(
    __in PPE_IMAGE Image,
    __in PRELOC_FUZZ_CONTEXT FuzzContext)
{
    PRELOCATION_BLOCK_CONTEXT SelfBlock;
    PRELOCATION_BLOCK_CONTEXT RealBlock;
    ULONG RelocBaseRva;
    ULONG NumberOfBlocks = FuzzContext->NumberOfBlocks;
    ULONG Count;

    //
    // Grab the base address that relocations will be loaded at
    //
    RelocBaseRva = FuzzContext->BaseRelocationSection->VirtualAddress;

    //
    // Grab the first block before we start prepending
    //
    RealBlock = FuzzContext->NewRelocationBlocks;

    //
    // Prepend self-updating relocation blocks for each block that exists
    //
    for (Count = 0; Count < NumberOfBlocks; Count++)
    {
        PRELOCATION_BLOCK_CONTEXT RelocationBlock;

        RelocationBlock = AllocateRelocationBlockContext(2);

        PrependRelocationBlockContext(
            FuzzContext,
            RelocationBlock);
    }

    //
    // Walk through each self updating block, fixing up the real blocks to
    // account for the amount of displacement that will be added to their Rva
    // attributes.
    //
    for (SelfBlock = FuzzContext->NewRelocationBlocks, Count = 0;
         Count < NumberOfBlocks;
         Count++, SelfBlock = SelfBlock->Next, RealBlock = RealBlock->Next)
    {
        SelfBlock->Rva = RelocBaseRva + RealBlock->RelocOffset;

        //
        // We'll relocate the two least significant bytes of the real block's
        // RVA and SizeOfBlock.
        //
        SelfBlock->Fixups[0] =
            (USHORT)((IMAGE_REL_BASED_HIGHLOW << 12) |
                     ((RealBlock->RelocOffset - 2) & 0xfff));
        SelfBlock->Fixups[1] =
            (USHORT)((IMAGE_REL_BASED_HIGHLOW << 12) |
                     ((RealBlock->RelocOffset + 2) & 0xfff));

        SelfBlock->Rva &= ~(PAGE_SIZE - 1);

        //
        // Account for the amount that will be added by the dynamic loader
        // after the first self-updating relocation blocks are processed.
        //
        *(PUSHORT)(&RealBlock->Rva) -=
            (USHORT)(FuzzContext->Displacement >> 16) + 2;
        *(PUSHORT)(&RealBlock->SizeOfBlock) -=
            (USHORT)(FuzzContext->Displacement >> 16) + 2;
    }
}

This test works by prepending a self-updating relocation block for each relocation block that exists in the binary.
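The linear, file-order pass that this technique depends on can be sketched in isolation. The following is a minimal sketch, not loader code: it uses plain stdint types in place of the WDK typedefs, and count_fixups and demo_dir are hypothetical names introduced here for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mirrors the on-disk IMAGE_BASE_RELOCATION header: a page RVA
 * followed by the total size of the block in bytes, including the
 * WORD fixup descriptors that trail the header. */
struct base_relocation {
    uint32_t virtual_address;
    uint32_t size_of_block;
};

/* Walk the relocation directory in the order the blocks appear,
 * counting fixup descriptors -- the same front-to-back pass a
 * loader makes, which is why a prepended block can modify the
 * blocks that follow it before they are processed. */
static size_t count_fixups(const uint8_t *dir, size_t dir_size)
{
    size_t total = 0, off = 0;

    while (off + sizeof(struct base_relocation) <= dir_size) {
        struct base_relocation blk;

        memcpy(&blk, dir + off, sizeof(blk));

        /* Stop on a malformed size rather than walking out of bounds. */
        if (blk.size_of_block < sizeof(blk) ||
            blk.size_of_block > dir_size - off)
            break;

        total += (blk.size_of_block - sizeof(blk)) / sizeof(uint16_t);
        off += blk.size_of_block;
    }

    return total;
}

/* Two little-endian demo blocks: one with two fixups, one with one. */
static const uint8_t demo_dir[] = {
    0x00, 0x10, 0x00, 0x00,  0x0c, 0x00, 0x00, 0x00, /* RVA 0x1000, size 12 */
    0x04, 0x30,  0x08, 0x30,                         /* two HIGHLOW fixups  */
    0x00, 0x20, 0x00, 0x00,  0x0a, 0x00, 0x00, 0x00, /* RVA 0x2000, size 10 */
    0x10, 0x30,                                      /* one HIGHLOW fixup   */
};
```

A prepended block simply becomes the first header this walk encounters, so its fixups are applied before any of the original blocks are read.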
In this way, if there were two relocation blocks that already existed, two self-updating relocation blocks would be prepended, one for each of the two existing relocation blocks. Following that, the self-updating relocation blocks are populated. Each self-updating relocation block is created with two fixup descriptors. These fixup descriptors apply fixups to the VirtualAddress and SizeOfBlock attributes of its corresponding existing relocation block. Since a HIGHLOW fixup adds the full 32-bit displacement to the four bytes it targets, the RVAs of the fixups are adjusted down by two so that only the two most significant bytes of the displacement land in each field. The end result of this operation is that the first n relocation blocks are responsible for fixing up the VirtualAddress and SizeOfBlock attributes of the subsequent relocation blocks. When relocations are processed in a linear fashion, the subsequent relocation blocks are updated in a way that allows them to be processed correctly. Running this test against the set of test applications produces the following results:
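As a sanity check of the minus-two adjustment described above, the arithmetic can be exercised in isolation. This is only a sketch: apply_highlow_fixup and demo_fixup_minus_two are hypothetical helpers introduced here, and a little-endian host is assumed.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper modeling a HIGHLOW fixup: the full 32-bit
 * load displacement is added to the little-endian dword at the
 * given offset within the image. */
static void apply_highlow_fixup(uint8_t *image, uint32_t offset,
                                uint32_t displacement)
{
    uint32_t value;

    memcpy(&value, image + offset, sizeof(value));
    value += displacement;
    memcpy(image + offset, &value, sizeof(value));
}

/* Place a 16-bit field (standing in for the low word of a later
 * block's VirtualAddress) at offset 6 and aim the fixup at offset
 * 4, two bytes before it, as the fuzzing code above does. Only the
 * high word of the displacement reaches the field. */
static uint16_t demo_fixup_minus_two(void)
{
    uint8_t image[8] = { 0 };
    uint16_t field = 0x1234;
    uint32_t displacement = 0x00050000; /* high word 5, low word 0 */

    memcpy(image + 6, &field, sizeof(field));
    apply_highlow_fixup(image, 4, displacement);
    memcpy(&field, image + 6, sizeof(field));

    return field; /* 0x1234 + (displacement >> 16) = 0x1239 */
}
```

This is why the fuzzing code can pre-subtract (Displacement >> 16) from the low word of each real block's Rva and SizeOfBlock and have the loader's own fixup pass restore the intended values.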