Does VirtualProtect require some privilege?

Posted: 2015-02-24 15:00:07

Tags: winapi api-hook

I am trying to implement IAT hooking. I wrote the hooking part in a DLL and the injection part in an exe that uses CreateRemoteThread. After injecting the DLL, I found that the VirtualProtect call in the hooking code always fails with ERROR_INVALID_PARAMETER, even though the parameters I pass are simply the values returned by VirtualQuery. I don't know what is going on. Does VirtualProtect require some privilege that I don't have?

Here is the part that fails:

if (0 == lstrcmpA(lpApiName, (LPCSTR)pImport->Name)){

                MEMORY_BASIC_INFORMATION thunkMemInfo;
                DWORD junk;
                DWORD oldProtect;
                if (!VirtualQuery(thunk, &thunkMemInfo, sizeof(MEMORY_BASIC_INFORMATION))){
                    return GetLastError();
                }

                if (!VirtualProtect(thunkMemInfo.BaseAddress, thunkMemInfo.RegionSize, thunkMemInfo.Protect, &oldProtect)){
                    return GetLastError();   // <-- returns 87 (ERROR_INVALID_PARAMETER) here
                }

                MessageBoxA(NULL, "aaaa", "Hooked", MB_OK);
                thunk->u1.Function = (DWORD)Callback;
                MessageBoxA(NULL, "bbbbb", "Hooked", MB_OK);

                if (!VirtualProtect(thunkMemInfo.BaseAddress, thunkMemInfo.RegionSize, oldProtect, &junk)){   // restore the original protection
                    return 3;
                }

                return S_OK;
            }
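
For context: the snippet above assumes that lpApiName, Callback, pImport (an IMAGE_IMPORT_BY_NAME entry) and thunk (the matching IAT slot) have already been set up by an import-table walk that is not shown in the question. A minimal sketch of such a walk, under that assumption and with illustrative variable names, could look like this:

    // Hypothetical surrounding walk (not shown in the question): find the IAT slot
    // of lpApiName in the current module's import table.
    HMODULE hMod = GetModuleHandle(NULL);
    PIMAGE_DOS_HEADER pDos = (PIMAGE_DOS_HEADER)hMod;
    PIMAGE_NT_HEADERS pNt = (PIMAGE_NT_HEADERS)((BYTE*)hMod + pDos->e_lfanew);
    PIMAGE_IMPORT_DESCRIPTOR pDesc = (PIMAGE_IMPORT_DESCRIPTOR)((BYTE*)hMod +
        pNt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress);

    for (; pDesc->Name != 0; ++pDesc) {
        PIMAGE_THUNK_DATA origThunk = (PIMAGE_THUNK_DATA)((BYTE*)hMod + pDesc->OriginalFirstThunk);
        PIMAGE_THUNK_DATA thunk     = (PIMAGE_THUNK_DATA)((BYTE*)hMod + pDesc->FirstThunk);

        for (; origThunk->u1.AddressOfData != 0; ++origThunk, ++thunk) {
            if (IMAGE_SNAP_BY_ORDINAL(origThunk->u1.Ordinal))
                continue;                               // imported by ordinal, no name to compare

            PIMAGE_IMPORT_BY_NAME pImport =
                (PIMAGE_IMPORT_BY_NAME)((BYTE*)hMod + origThunk->u1.AddressOfData);

            if (0 == lstrcmpA(lpApiName, (LPCSTR)pImport->Name)) {
                // ... the VirtualQuery / VirtualProtect block from the question goes here
            }
        }
    }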

My injection code, written in C#, looks like this:

public static void InjectDLL(IntPtr hProcess, String strDLLName, Process proc)
    {
        IntPtr bytesout;

        // Length of string containing the DLL file name +1 byte padding
        Int32 LenWrite = strDLLName.Length + 1;
        // Allocate memory within the virtual address space of the target process
        IntPtr AllocMem = (IntPtr)VirtualAllocEx(hProcess, IntPtr.Zero, (uint)LenWrite, 0x1000, 0x40); // MEM_COMMIT, PAGE_EXECUTE_READWRITE; allocated for WriteProcessMemory

        // Write DLL file name to allocated memory in target process
        WriteProcessMemory(hProcess, AllocMem, strDLLName, (UIntPtr)LenWrite, out bytesout);
        // Function pointer "Injector"
        UIntPtr Injector = (UIntPtr)GetProcAddress(GetModuleHandle("kernel32.dll"), "LoadLibraryA");

        if (Injector == UIntPtr.Zero)
        {
            Console.WriteLine(" Injector Error! \n ");
            // return failed
            return;
        }

        // Create thread in target process, and store handle in hThread
        IntPtr hThread = (IntPtr)CreateRemoteThread(hProcess, IntPtr.Zero, 0, Injector, AllocMem, 0, out bytesout);
        // Make sure thread handle is valid
        if (hThread == IntPtr.Zero)
        {
            //incorrect thread handle ... return failed
            Console.WriteLine(" hThread [ 1 ] Error! \n ");
            return;
        }
        // Time-out is 10 seconds...
        int Result = WaitForSingleObject(hThread, 10 * 1000);
        // Check whether thread timed out...
        if (Result == 0x00000080 || Result == 0x00000102 || Result == -1) // WAIT_ABANDONED, WAIT_TIMEOUT or WAIT_FAILED (0xFFFFFFFF)
        {
            /* Thread timed out... */
            Console.WriteLine(" hThread [ 2 ] Error! \n ");
            // Make sure thread handle is valid before closing... prevents crashes.
            if (hThread != IntPtr.Zero)
            {
                //Close thread in target process
                CloseHandle(hThread);
            }
            return;
        }
        // Sleep thread for 1 second
        Thread.Sleep(1000);
        // Clear up allocated space ( Allocmem )
        VirtualFreeEx(hProcess, AllocMem, (UIntPtr)0, 0x8000);
        // Make sure thread handle is valid before closing... prevents crashes.
        if (hThread != IntPtr.Zero)
        {
            //Close thread in target process
            CloseHandle(hThread);
        }
        // return succeeded (the remote thread was not created suspended, so it does not need to be resumed)
        System.Windows.MessageBox.Show("Inject!");
        return;
    }


Process proc = Process.GetProcessesByName(exeName)[0];
// System.Windows.MessageBox.Show(proc.ProcessName + "Start!");
// PROCESS_CREATE_THREAD | PROCESS_QUERY_INFORMATION | PROCESS_VM_OPERATION | PROCESS_VM_READ | PROCESS_VM_WRITE
uint dwAccl = 0x0002 | 0x0400 | 0x0008 | 0x0010 | 0x0020;
InjectDLL((IntPtr)tools.OpenProcess(dwAccl, 1, proc.Id), "Loader.dll", proc);

1 answer:

Answer 0 (score: 0):


The third parameter: thunkMemInfo.Protect

That is not a new memory-protection constant; it is the value you retrieved with VirtualQuery, so all you are doing is re-applying the protection the region already has.

Change it to PAGE_EXECUTE_READWRITE (0x40) instead.
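
Applied to the block from the question, a minimal sketch of that change could look like this (thunk and Callback are the ones from the question):

    MEMORY_BASIC_INFORMATION thunkMemInfo;
    DWORD oldProtect;
    DWORD junk;

    if (!VirtualQuery(thunk, &thunkMemInfo, sizeof(thunkMemInfo)))
        return GetLastError();

    // Ask for an explicit, writable protection instead of echoing thunkMemInfo.Protect back.
    if (!VirtualProtect(thunkMemInfo.BaseAddress, thunkMemInfo.RegionSize,
                        PAGE_EXECUTE_READWRITE, &oldProtect))
        return GetLastError();

    thunk->u1.Function = (DWORD)Callback;               // overwrite the IAT entry

    // Restore whatever protection the region had before the patch.
    if (!VirtualProtect(thunkMemInfo.BaseAddress, thunkMemInfo.RegionSize,
                        oldProtect, &junk))
        return GetLastError();

    return S_OK;

The first call returns the region's previous protection in oldProtect, and the second call simply puts that value back once the IAT entry has been overwritten.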