I am trying to use PyObjC to overlay some text on an image, while working toward an answer to my question, "Annotate images using tools built into OS X". Using CocoaMagic, a RubyObjC replacement for RMagick, as a reference, I came up with this:
#!/usr/bin/env python
from AppKit import *
source_image = "/Library/Desktop Pictures/Nature/Aurora.jpg"
final_image = "/Library/Desktop Pictures/.loginwindow.jpg"
font_name = "Arial"
font_size = 76
message = "My Message Here"
app = NSApplication.sharedApplication() # remove some warnings
# read in an image
image = NSImage.alloc().initWithContentsOfFile_(source_image)
image.lockFocus()
# prepare some text attributes
text_attributes = NSMutableDictionary.alloc().init()
font = NSFont.fontWithName_size_(font_name, font_size)
text_attributes.setObject_forKey_(font, NSFontAttributeName)
text_attributes.setObject_forKey_(NSColor.blackColor, NSForegroundColorAttributeName)
# output our message
message_string = NSString.stringWithString_(message)
size = message_string.sizeWithAttributes_(text_attributes)
point = NSMakePoint(400, 400)
message_string.drawAtPoint_withAttributes_(point, text_attributes)
# write the file
image.unlockFocus()
bits = NSBitmapImageRep.alloc().initWithData_(image.TIFFRepresentation)
data = bits.representationUsingType_properties_(NSJPGFileType, nil)
data.writeToFile_atomically_(final_image, false)
When I run it, I get:
Traceback (most recent call last):
  File "/Users/clinton/Work/Problems/TellAtAGlance/ObviouslyTouched.py", line 24, in <module>
    message_string.drawAtPoint_withAttributes_(point, text_attributes)
ValueError: NSInvalidArgumentException - Class OC_PythonObject: no such selector: set
Looking at the documentation for drawAtPoint:withAttributes:, it says: "You should only invoke this method when an NSView has focus." NSImage is not a subclass of NSView, but I would expect this to work anyway, and something very similar appears to work in the Ruby example.
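For what it's worth, here is the bare pattern I would expect to work, with my attribute dictionary stripped out (my assumption being that lockFocus() on an NSImage is enough to establish a drawing context for the string-drawing methods):

from AppKit import NSImage, NSString
img = NSImage.alloc().initWithSize_((200, 100))
img.lockFocus()
# an empty attributes dictionary should just use the default font and color
NSString.stringWithString_("test").drawAtPoint_withAttributes_((10, 10), {})
img.unlockFocus()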
What do I need to change to make this work?
I rewrote the code, translating it faithfully, line by line, into an Objective-C Foundation tool. It ran without problems. [I will gladly post it if there is a reason to.]
The question then becomes: how does
[message_string drawAtPoint:point withAttributes:text_attributes];
differ from
message_string.drawAtPoint_withAttributes_(point, text_attributes)
? And is there a way to tell which "OC_PythonObject" raised the NSInvalidArgumentException?
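One thing I tried, on the assumption that PyObjC wraps any plain Python value it hands across the bridge as an OC_PythonObject, was to print the type of everything being passed in and look for something that is not an Objective-C proxy (this snippet is just my own debugging, not part of the script above):

for key in text_attributes.allKeys():
    # one of these should stand out as a plain Python object rather than a Cocoa one
    print(type(text_attributes.objectForKey_(key)))
print(type(point))
print(type(message_string))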
Answer (score: 1):
Here are the problems in the code:
text_attributes.setObject_forKey_(NSColor.blackColor, NSForegroundColorAttributeName)
->
text_attributes.setObject_forKey_(NSColor.blackColor(), NSForegroundColorAttributeName)
bits = NSBitmapImageRep.alloc().initWithData_(image.TIFFRepresentation)
data = bits.representationUsingType_properties_(NSJPGFileType, nil)
->
bits = NSBitmapImageRep.imageRepWithData_(image.TIFFRepresentation())
data = bits.representationUsingType_properties_(NSJPEGFileType, None)
They really are just small typos.
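For reference, here is a sketch of the whole script with those fixes folded in (same paths and values as in the question; it should run as-is on OS X with PyObjC installed). Note that the last two lines also need Python's None and False in place of nil and false:

#!/usr/bin/env python
from AppKit import *

source_image = "/Library/Desktop Pictures/Nature/Aurora.jpg"
final_image = "/Library/Desktop Pictures/.loginwindow.jpg"
font_name = "Arial"
font_size = 76
message = "My Message Here"

app = NSApplication.sharedApplication()  # avoids some warnings

# read in the image and make it the current drawing destination
image = NSImage.alloc().initWithContentsOfFile_(source_image)
image.lockFocus()

# prepare the text attributes -- blackColor() is called this time
text_attributes = NSMutableDictionary.alloc().init()
text_attributes.setObject_forKey_(NSFont.fontWithName_size_(font_name, font_size), NSFontAttributeName)
text_attributes.setObject_forKey_(NSColor.blackColor(), NSForegroundColorAttributeName)

# draw the message onto the image
message_string = NSString.stringWithString_(message)
message_string.drawAtPoint_withAttributes_(NSMakePoint(400, 400), text_attributes)
image.unlockFocus()

# write the file: TIFFRepresentation() is called, the constant is NSJPEGFileType,
# and None / False replace nil / false
bits = NSBitmapImageRep.imageRepWithData_(image.TIFFRepresentation())
data = bits.representationUsingType_properties_(NSJPEGFileType, None)
data.writeToFile_atomically_(final_image, False)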
Note that the middle part of the code can be replaced with this more readable variant:
# prepare some text attributes
text_attributes = {
    NSFontAttributeName: NSFont.fontWithName_size_(font_name, font_size),
    NSForegroundColorAttributeName: NSColor.blackColor()
}
# output our message
NSString.drawAtPoint_withAttributes_(message, (400, 400), text_attributes)
I learned this by looking at the source code of NodeBox, a dozen lines of psyphography.py and cocoa.py, in particular the save and _getImageData methods.