I'm trying to replicate UILabel's text drawing with Core Text so that I can work out which word in a UILabel was tapped (for example, a hashtag). My code works, but it always undercounts the label's text, omitting the last line. I can't figure out why, despite days of head-scratching.
Suppose we have the CGPoint that was tapped inside a UILabel subclass:
// textRect is the rectangle the label actually draws its text in
CGRect textRect = [self textRectForBounds:self.bounds limitedToNumberOfLines:self.numberOfLines];
// Vertically center the text rect within the label's bounds
textRect.origin.y = (self.bounds.size.height - textRect.size.height) / 2;

CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)self.attributedText);

// Offset tap coordinates by the textRect origin to make them relative to the origin of the frame
point = CGPointMake(point.x - textRect.origin.x, point.y - textRect.origin.y);
// Convert tap coordinates (top-left origin) to Core Text coordinates (bottom-left origin)
point = CGPointMake(point.x, textRect.size.height - point.y);

CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, textRect);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, [self.attributedText length]), path, NULL);

CFArrayRef lines = CTFrameGetLines(frame);
NSInteger numberOfLines = self.numberOfLines > 0 ? MIN(self.numberOfLines, CFArrayGetCount(lines)) : CFArrayGetCount(lines);
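For context, here is a sketch of how the hit-test continues from this point in my subclass (the variable names `tappedIndex` and the loop structure are mine; the Core Text calls are the standard `CTFrameGetLineOrigins` / `CTLineGetStringIndexForPosition` APIs):

```objc
// Fetch the baseline origin of each line in Core Text (bottom-left) coordinates
CGPoint lineOrigins[numberOfLines];
CTFrameGetLineOrigins(frame, CFRangeMake(0, numberOfLines), lineOrigins);

CFIndex tappedIndex = kCFNotFound;
for (CFIndex i = 0; i < numberOfLines; i++) {
    CTLineRef line = CFArrayGetValueAtIndex(lines, i);
    CGFloat ascent, descent, leading;
    CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
    CGPoint origin = lineOrigins[i];
    // The baseline sits at origin.y; the line covers
    // [origin.y - descent, origin.y + ascent] vertically.
    if (point.y >= origin.y - descent && point.y <= origin.y + ascent) {
        // Ask Core Text which character index sits under the tap's x position
        tappedIndex = CTLineGetStringIndexForPosition(line,
                          CGPointMake(point.x - origin.x, 0));
        break;
    }
}

CFRelease(frame);
CFRelease(path);
CFRelease(framesetter);
```

The symptom described below shows up in this loop: the tap never matches the last line's bounds.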
The line count always comes out one fewer than it should be, and point recognition works perfectly for every line except the last (or, if the text is a single line, doesn't work at all).