Native Binder
1. Native Binder
1.1. BpInterface
BpInterface:IInterface
BpInterface resides in the proxy part, but it's nothing but an encapsulation of the remote IBinder (actually a BpBinder).
Its most important entry point is asInterface(IBinder) (usually reached via interface_cast<IXX>()), which wraps the remote IBinder in a BpXX; the BpXX implements the IXX.xx functions and dispatches each of them to BpBinder::transact().
1.2. IInterface
1.3. BnInterface
BnInterface : BBinder, IInterface. So BnInterface resides in the stub part and plays two roles:
- it extends IInterface, so it (via its subclass, the service implementation) implements the IXX.xx functions
- it extends BBinder, so it implements onTransact(), which is called by BBinder::transact() and dispatches each transaction code to its own IXX.xx functions
1.4. BBinder
BBinder is the counterpart of BpBinder; it is the real stub part.
When IPCThreadState receives an incoming IPC, it calls BBinder::transact(), which in turn calls the derived class's (usually BnInterface's) onTransact().
1.5. BpBinder
When BnInterface, as the stub part, is returned to a client as an IBinder through Parcel::writeStrongBinder() and Parcel::readStrongBinder(), what the proxy part actually gets as its IBinder is a BpBinder (both BpBinder and BBinder derive from IBinder).
To make it clear: BpInterface uses a BpBinder, while BnInterface extends BBinder.
BpBinder's most important method is transact(), which calls IPCThreadState::transact() to interact with the binder driver.
BpBinder's member variable `handle` is used by the driver to distinguish it from other BpBinders and to find the corresponding stub process.
1.6. Binder Thread
1.7. ProcessState
Both the proxy process and the stub process have one and only one ProcessState; it is a per-process singleton.
- proxy part:
BpBinder->transact()->IPCThreadState.transact()
IPCThreadState needs to use ProcessState to interact with the binder driver
- stub part:
1.7.1. StartThreadPool
1.8. IPCThreadState
1.8.1. joinThreadPool
1.8.2. transact
1.8.3. waitForResponse
1.9. Parcel
1.9.1. Parcel.freeData()
Parcel::freeData() -> Parcel::mOwner() -> ioctl(BINDER_WRITE_READ) with BC_FREE_BUFFER -> drv::binder_thread_write() -> free the buffer of the proc
Parcel::mOwner is actually a callback function; it is registered on the parcel when handling BR_TRANSACTION (the server gets the data parcel) or BR_REPLY (the client gets the reply parcel), through `ipcSetDataReference`. The callback issues an ioctl to the driver to free the parcel's buffer.
Normally the application layer does not need to free a parcel explicitly:
For the data parcel received by the server, IPCThreadState is responsible for invoking the parcel's destructor, which calls freeData:

    IPCThreadState::executeCommand()
        case BR_TRANSACTION:
            Parcel buffer;
            buffer.ipcSetDataReference(...);
            b->transact(tr.code, buffer, &reply, tr.flags);
            // after the server's transact() returns, buffer's destructor frees the data
For the client's reply parcel, the application has to manage it itself, but the code generated by aidl handles this for Java applications:
    virtual status_t onAuthenticated(int64_t devId, int32_t fpId, int32_t gpId) {
        Parcel data, reply;
        data.writeInterfaceToken(IFingerprintDaemonCallback::getInterfaceDescriptor());
        data.writeInt64(devId);
        data.writeInt32(fpId);
        data.writeInt32(gpId);
        return remote()->transact(ON_AUTHENTICATED, data, &reply, IBinder::FLAG_ONEWAY);
        // reply is destructed here
    }
    @Override
    public int test(int x) throws android.os.RemoteException {
        android.os.Parcel _data = android.os.Parcel.obtain();
        android.os.Parcel _reply = android.os.Parcel.obtain();
        int _result;
        try {
            _data.writeInterfaceToken(DESCRIPTOR);
            _data.writeInt(x);
            mRemote.transact(Stub.TRANSACTION_test, _data, _reply, 0);
            _reply.readException();
            _result = _reply.readInt();
        } finally {
            // the reply is recycled explicitly here
            _reply.recycle();
            _data.recycle();
        }
        return _result;
    }