Bus error when writing mmapped data

Date: 2016-10-09 01:22:06

Tags: c shared-memory mmap bus-error

When I first finished this project last semester, the code worked fine. Now I get a bus error while writing to the mmapped memory that is meant to be shared between processes, and I don't know why it no longer works.

Account_Info *mapData()
{
    int fd;
    //open/create file with read and write permission and check return value
    if ((fd = open("accounts", O_RDWR|O_CREAT, 0644)) == -1)
    {
            perror("Unable to open account list file.");
            exit(0);
    }

    //map data to be shared with different processes
    Account_Info *accounts = mmap((void*)0, (size_t) 100*(sizeof(Account_Info)), PROT_WRITE,
    MAP_SHARED, fd, 0);

    int count= 0;

    //loop to initialize values of Account_Info struct
    while (count != 20)
    {
            //bus error occurs here
            accounts[count].CurrBalance= 0;
            accounts[count].flag = 0;
            int i = 0;
            while (i != 100)
            {
                    //place NULL terminator into each element of AccName
                    accounts[count].AccName[i]= '\0';
                    i++;
            }

            count++;
    }

    close(fd);
    return accounts;
}

2 answers:

Answer 0 (score: 2)

The documented cause of SIGBUS with mmap is:

    Attempted access to a portion of the buffer that does not correspond to the file (for example, beyond the end of the file, including the case where another process has truncated the file).

My guess is that the accounts file did not exist, so open with O_CREAT created it. But it has a size of zero, so any attempt to read or write through the mapping faults. You need to pad the file with enough zeros (or other content) to cover the mapping, for example with ftruncate.
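A minimal sketch of that fix against the question's own mapData() (Account_Info and the 100-entry mapping size come from the question; the PROT_READ flag, the MAP_FAILED check, and the non-zero exit codes are additions of this sketch, not part of the original code):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

Account_Info *mapData()
{
    int fd;
    //Account_Info is the struct from the question (its definition is not shown there)
    size_t len = (size_t) 100 * sizeof(Account_Info);

    //open/create file with read and write permission and check return value
    if ((fd = open("accounts", O_RDWR|O_CREAT, 0644)) == -1)
    {
            perror("Unable to open account list file");
            exit(1);
    }

    //a freshly created file is 0 bytes long; extend it so it backs the whole
    //mapping, otherwise the first store into the mapping raises SIGBUS
    if (ftruncate(fd, (off_t) len) == -1)
    {
            perror("Unable to size account list file");
            exit(1);
    }

    //map data to be shared with different processes
    Account_Info *accounts = mmap((void*)0, len, PROT_READ|PROT_WRITE,
    MAP_SHARED, fd, 0);
    if (accounts == MAP_FAILED)
    {
            perror("mmap failed");
            exit(1);
    }

    /* ... initialize accounts[] as in the question ... */

    close(fd);
    return accounts;
}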

Answer 1 (score: 0)

If you try to write past the mapped area of the file, you will get SIGBUS.

The likelihood is very high that your backing store file accounts is truncated/too short. For example, if the file has room for 10 struct entries and you write to the 11th, you will get SIGBUS.

Do an fstat to get st_size and compare it against the length argument you passed to mmap.

You may want to consider using ftruncate to extend the file before doing the mmap.
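A minimal sketch of that check, written as a hypothetical ensure_backing() helper (the helper name and the exit-on-error behaviour are assumptions of this sketch, not part of the original answer):

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical helper: make sure the file behind fd is at least len bytes
   long before it is mmap'd, so stores into the mapping never land past
   end-of-file and raise SIGBUS. */
static void ensure_backing(int fd, size_t len)
{
    struct stat sb;

    if (fstat(fd, &sb) == -1)
    {
            perror("fstat");
            exit(1);
    }

    //st_size is the file length the kernel can actually back with pages
    if ((size_t) sb.st_size < len)
    {
            if (ftruncate(fd, (off_t) len) == -1)
            {
                    perror("ftruncate");
                    exit(1);
            }
    }
}

Calling ensure_backing(fd, 100 * sizeof(Account_Info)) just before the mmap in the question's mapData() would cover both the freshly created zero-length file and a file truncated by another process.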