CWAC SafeRoom and code shrinking: how can I shrink code while using SafeRoom? [Fixed]

Date: 2019-10-08 06:37:40

Tags: android sqlcipher commonsware-cwac

I need help getting Android's R8 code shrinker to work with CWAC SafeRoom.

Both are implemented and tested in debug builds, but when I generate a release APK the app crashes with this stack trace:

2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166] No pending exception expected: java.lang.NoSuchFieldError: no "J" field "mNativeHandle" in class "Lnet/sqlcipher/database/SQLiteDatabase;" or its superclasses
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at java.lang.String java.lang.Runtime.nativeLoad(java.lang.String, java.lang.ClassLoader) (Runtime.java:-2)
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at void java.lang.Runtime.loadLibrary0(java.lang.ClassLoader, java.lang.String) (Runtime.java:1014)
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at void java.lang.System.loadLibrary(java.lang.String) (System.java:1672)
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at void net.sqlcipher.database.SQLiteDatabase$a.a(java.lang.String[]) (:-1)
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at void net.sqlcipher.database.SQLiteDatabase.a(net.sqlcipher.database.SQLiteDatabase$e) (:-1)
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at void net.sqlcipher.database.SQLiteDatabase.a(android.content.Context, java.io.File) (:-1)
2019-10-08 14:10:32.890 22013-22013/? A/.sample: thread.cc:2166]   at void net.sqlcipher.database.SQLiteDatabase.a(android.content.Context) (:-1)

I believe I am missing something: perhaps the SafeRoom package needs to be excluded in the ProGuard rules file (if ProGuard is used), or perhaps the SQLCipher library has to be added explicitly as a dependency when code shrinking is enabled. If neither of those is it, I am out of ideas.

Notes:

  • For privacy reasons I changed the application ID and the identifiers in the stack trace
  • The problem only occurs when I generate a release APK; otherwise the SQLCipher database works fine and is correctly implemented (wired up roughly as in the sketch below)
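
For context, the crash happens while SQLCipher's native library is being loaded: the JNI code looks up the long ("J") field mNativeHandle on net.sqlcipher.database.SQLiteDatabase by name, and the shrinker has renamed or removed it. Below is a minimal sketch of how Room and SafeRoom are presumably wired together, assuming the usual SafeHelperFactory setup with its char[] constructor; Note, NoteDao, AppDatabase, DatabaseProvider, the database file name, and the passphrase handling are hypothetical placeholders, not taken from the question:

import android.arch.persistence.room.Dao;
import android.arch.persistence.room.Database;
import android.arch.persistence.room.Entity;
import android.arch.persistence.room.Insert;
import android.arch.persistence.room.PrimaryKey;
import android.arch.persistence.room.Room;
import android.arch.persistence.room.RoomDatabase;
import android.content.Context;

import com.commonsware.cwac.saferoom.SafeHelperFactory;

// Placeholder entity, just enough for Room to generate a schema.
@Entity
class Note {
    @PrimaryKey
    public long id;
}

// Placeholder DAO.
@Dao
interface NoteDao {
    @Insert
    void insert(Note note);
}

// Placeholder database class.
@Database(entities = {Note.class}, version = 1)
abstract class AppDatabase extends RoomDatabase {
    abstract NoteDao noteDao();
}

class DatabaseProvider {
    static AppDatabase open(Context context, char[] passphrase) {
        // SafeHelperFactory supplies a SQLCipher-backed SupportSQLiteOpenHelper to Room.
        SafeHelperFactory factory = new SafeHelperFactory(passphrase);

        // Opening the database loads SQLCipher's native library; that load is
        // what fails in the release build above (NoSuchFieldError for mNativeHandle).
        return Room.databaseBuilder(context.getApplicationContext(), AppDatabase.class, "sample.db")
                .openHelperFactory(factory)
                .build();
    }
}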

The app module's build.gradle:

android {
    compileSdkVersion 28
    buildToolsVersion "28.0.3"
    defaultConfig {
        applicationId "sample.id"
        minSdkVersion 23
        targetSdkVersion 28
        versionCode 1
        versionName "0.1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            useProguard false
            minifyEnabled true
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.android.support:design:28.0.0'
    implementation 'com.android.support.constraint:constraint-layout:1.1.3'

    // JUnit Library
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'

    // Room Database Library
    implementation "android.arch.persistence.room:runtime:1.1.1"
    implementation 'com.android.support:design:28.0.0'
    implementation 'com.android.support:support-annotations:28.0.0'
    implementation 'android.arch.lifecycle:extensions:1.1.1'
    annotationProcessor "android.arch.persistence.room:compiler:1.1.1"

    // CWAC SafeRoom Library
    implementation "com.commonsware.cwac:saferoom:1.2.1"

//... some unimportant android libraries
}


Solution:

Per @commonsware's guidance, add the following to your ProGuard rules file (this applies whether R8 or ProGuard does the shrinking), so that the SQLCipher classes the native library references by name are kept unobfuscated, and it should work:

-keep class net.sqlcipher.** { *; }
-keep class net.sqlcipher.database.* { *; }

1 Answer:

Answer 0 (score: 0):

Based on this issue, add this to your ProGuard keep rules:

-keep class net.sqlcipher.** { *; }
-keep class net.sqlcipher.database.* { *; }
