Loading and storing a file at compile time using a type provider

Date: 2018-04-28 06:51:36

Tags: ffi compile-time type-providers idris

I want to load a (binary) file at compile time and store it in a top-level variable of type Bytes:

module FileProvider

import Data.Bits
import Data.Bytes
import Data.Buffer

%default total

export
loadImage : String -> IO (Provider Bytes)
loadImage fileName = do
    Right file <- openFile fileName Read
      | Left err => pure (Error $ show err)
    Just buf <- newBuffer size
      | Nothing => pure (Error "allocation failed")
    readBufferFromFile file buf size
    Provide . dropPrefix 2 . pack <$> bufferData buf
  where
    size : Int
    size = 256 * 256 + 2

It seems to work fine at runtime:

module Main

import FileProvider
import Data.Bits
import Data.Bytes

main : IO ()
main = do
     Provide mem <- loadImage "ram.mem"
       | Error err => putStrLn err
     printLn $ length mem

However, if I try to run it at compile time, it fails with a cryptic message mentioning the FFI:

module Main

import FileProvider
import Data.Bits
import Data.Bytes

%language TypeProviders
%provide (mem : Bytes) with loadImage "ram.mem"

main : IO ()
main = do
     printLn $ length mem

$ idris -p bytes -o SOMain SOMain.idr 
Symbol "idris_newBuffer" not found
idris: user error (Could not call foreign function "idris_newBuffer" with args [65538])

What is going on here, and how can I load the contents of a file at compile time?

1 answer:

Answer 0 (score: 1)

idris_newBuffer is a C function used by Data.Buffer. From the docs to type providers:

"If we wish to call our foreign functions from interpreted code (such as the REPL or a type provider), we need to dynamically link a library containing the symbols we need."

So every function that uses the FFI needs a dynamic library to link against. Here, that means Data.Buffer and Data.ByteArray. Let's focus on the first one and see what comes up:

Data.Buffer would therefore need %dynamic "idris_buffer.so" (and not just the %include C "idris_buffer.h" it currently has). You can copy idris/rts/idris_buffer.(h|c) and remove the functions that depend on other rts internals. Compile the shared library with:

cc -o idris_buffer.so -fPIC -shared idris_buffer.c
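
For illustration, the top of such a modified copy of Data.Buffer would then start roughly like this (a sketch; the rest of the module stays as it is):

module Data.Buffer

%include C "idris_buffer.h"
-- additionally link the symbols dynamically so that interpreted code
-- (the REPL or a type provider) can call them as well
%dynamic "idris_buffer.so"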

If a Data.Buffer modified like this is used, you still get an error message:

Could not use <<handle {handle: ram.mem}>> as FFI arg.

The File argument that the FFI call in Data.Buffer.readBufferFromFile receives from openFile is what causes the trouble. Idris sees that fopen : String -> String -> IO Ptr (another C function) is used and transforms it into a Haskell call. On the one hand that is nice: during compilation we are interpreting Idris code, and it keeps functions like openFile agnostic of the C / JS / Node / ... backends. But in this case it is unfortunate, because the Haskell backend does not support the returned file handle for FFI calls. So we can write another function that does the same thing under a different name, so that the result stays a plain Ptr, since those can be used in FFI calls.
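
A minimal sketch of such a wrapper, assuming the libc fopen symbol is visible to the interpreter (the name _fopen is only illustrative):

-- open a file and keep the raw pointer, so it can still be passed
-- on to FFI calls from interpreted code (REPL / type providers)
_fopen : String -> String -> IO Ptr
_fopen path mode = foreign FFI_C "fopen" (String -> String -> IO Ptr) path mode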

With that done, another error comes up:

Could not use prim__registerPtr (<<ptr 0x00000000009e2bf0>>) (65546) as FFI arg.

Data.Buffer uses ManagedPtr under the hood. And yes, those are not supported in FFI calls either, so they need to be changed to plain Ptr. I guess both of these could be supported in the compiler.
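
The change inside the copied Data.Buffer is of this shape (only a sketch; the field names are illustrative, not necessarily those of the module in base):

record Buffer where
  constructor MkBuffer
  rawdata  : Ptr   -- was ManagedPtr; a plain Ptr can be used as an FFI arg
  size     : Int
  location : Int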

At the end, everything should work with a valid %provide (mem : Buffer). But no, because:

Can't convert pointers back to TT after execution.

Even though Idris can now read the file at compile time, it cannot make the Ptr (or anything else that contains pointers) accessible at runtime - which is quite reasonable: a pointer that exists while the program is being compiled is just a meaningless value at runtime. So you need to convert the data into the provided result inside the provider, or use an intermediate, pointer-free format such as List Bits8 with Provider (List Bits8).
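
Assuming the FFI issues above have been worked around, a provider in that intermediate format could reuse the question's loadImage almost unchanged; loadImageBytes is only an illustrative name:

loadImageBytes : String -> IO (Provider (List Bits8))
loadImageBytes fileName = do
    Right file <- openFile fileName Read
      | Left err => pure (Error $ show err)
    Just buf <- newBuffer size
      | Nothing => pure (Error "allocation failed")
    readBufferFromFile file buf size
    -- bufferData yields a pointer-free List Bits8;
    -- drop the two prefix bytes as in the original loadImage
    Provide . drop 2 <$> bufferData buf
  where
    size : Int
    size = 256 * 256 + 2

which would then be used as %provide (mem : List Bits8) with loadImageBytes "ram.mem".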

I made a short example that makes a Buffer accessible in main. Data.Buffer is basically stripped down to _openFile and uses Ptr instead of ManagedPtr. I hope this helps you somehow, but maybe one of the compiler people can give more background.