adb over Wi-Fi

Connecting to a device over Wi-Fi is very convenient; here are the steps to achieve the same –

  • Plug your device into the PC and ensure that adb is able to detect it –
$ adb devices
List of devices attached
<device-id> device
  • Ensure that the device and the PC are connected to the same network.
  • Find out the IP address of the device. Go to Settings > Status > IP Address
  • Start the adb in TCP mode.
$ adb tcpip <port-number>
restarting in TCP mode port: <port-number>
  • Connect to the device on its IP address via the adb command
$ adb connect <ip-address>
connected to <ip-address>:<port-number>
  • Running adb devices should now show the device connected over both USB and its IP address –
$ adb devices
<device-id> device
<ip-address>:<port-number> device
  • Now remove the USB cable and you should still be connected to the device
$ adb shell
device:/ $
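The steps above can be collected into one small function. This is a sketch, not an official adb workflow: it assumes adb is on the PATH, and uses 5555 (the conventional default for adb over TCP) when no port is given.

```shell
# Sketch of the steps above; assumes `adb` is on the PATH and the device
# is still plugged in over USB. 5555 is an assumed default port.
adb_wifi_connect() {
    ip="$1"
    port="${2:-5555}"
    adb tcpip "$port"           # restart adbd on the device in TCP mode
    adb connect "$ip:$port"     # connect to the device over the network
    adb devices                 # the <ip>:<port> entry should now be listed
}
```

Run it as `adb_wifi_connect 192.168.1.5` while the USB cable is still attached, then unplug the cable.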

Remote debugging platform apps via Android Studio

You can step through your platform application using Android Studio. Here are the steps involved –

  1. Import your application source into the Android Studio. There is no need to get it to build.
  2. Create a Remote Debugging configuration.
    • Run > Edit Configurations
    • Click the “+” sign and select Remote
    • Change the port to 8700
    • Click on OK.
  3. Put breakpoints in your code in the Android Studio.
  4. Start your application on the device.
  5. Open the Android Monitor and click your application’s package name. (It should show a green bug to indicate that the debugging session is connected.)
  6. Hit the debug button in the Android Studio.
  7. Start your use case and it should be good to go.
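If the Android Monitor selection step gives you trouble, the port can also be forwarded by hand using adb’s JDWP support. This is a sketch: the choice of 8700 simply matches the Remote configuration above, and the pid must come from `adb jdwp` (which lists the pids of debuggable apps).

```shell
# Sketch: manually forward local port 8700 (the port in the Remote
# configuration above) to an app's JDWP endpoint. Get the pid from
# `adb jdwp` (note: it streams its output; Ctrl-C to stop it).
forward_jdwp() {
    pid="$1"
    adb forward tcp:8700 jdwp:"$pid"
}
```

For example, `forward_jdwp 1234`, where 1234 is a pid printed by `adb jdwp`.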

This post only summarizes the blog post mentioned below; its purpose is to ensure that I don’t lose the link to the original post :). Please visit the source for more information on this topic.

Source – http://ronubo.blogspot.in/2016/01/debugging-aosp-platform-code-with.html

 


Android N and Shared Libraries

Starting with Android N, Android applications are no longer allowed to access private shared libraries present on the system image (only the public NDK libraries remain accessible). The point is to make the application self-contained, without depending on the libraries present on the device itself. This is great from the point of view of Android and its users, but not so great if you are developing an application – another headache to take care of.

In case you do try to access such a library from the application, you will get an error along the following lines –

11-15 00:19:13.733 1561 1561 E AndroidRuntime: java.lang.UnsatisfiedLinkError: dlopen failed: library "/system/lib64/libmynativelib.so" needed or dlopened by "/system/lib64/libnativeloader.so" is not accessible for the namespace "classloader-namespace"
11-15 00:19:13.733 1561 1561 E AndroidRuntime: at java.lang.Runtime.loadLibrary0(Runtime.java:977)
11-15 00:19:13.733 1561 1561 E AndroidRuntime: at java.lang.System.loadLibrary(System.java:1530)
11-15 00:19:13.733 1561 1561 E AndroidRuntime: at com.example.testapp.Test.<clinit>(Test.java:48)

We can solve this issue in two ways –

1)  If you are developing the application in Android Studio, make all your libraries self-contained inside the APK: just add a jniLibs folder to the app’s source tree (src/main/jniLibs is the default Gradle location) and place the libs in it.

2)  If you are building via an AOSP build using Android.mk files, I haven’t found a way to make the libraries part of the APK – in this build approach, no matter what you do, the libraries don’t get included inside the APK. But since you have control over the system image (because you are building the AOSP image), change the file “public.libraries.txt” to include all the libraries that you want exempted from this rule. This file should be present in “/etc/” or “/vendor/etc/”.
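For approach 2, a quick way to try the exemption on a running device is to edit the file in place. This is a sketch with several assumptions: `adb root` and `adb remount` only work on eng/userdebug builds, libmynativelib.so is the example name from the log above, and the proper fix still belongs in the AOSP source for public.libraries.txt.

```shell
# Sketch: append a library to the exemption list on an eng/userdebug
# build. libmynativelib.so is the example from the log above.
whitelist_native_lib() {
    lib="$1"
    adb root
    adb remount                    # make the system partitions writable
    adb shell "echo $lib >> /vendor/etc/public.libraries.txt"
    adb reboot                     # restart so the new list is picked up
}
```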

References –
1) http://android-developers.blogspot.in/2016/06/improving-stability-with-private-cc.html
2) https://source.android.com/devices/tech/config/namespaces_libraries.html


Enabling coredump and analysis on Linux

A coredump contains the recorded state of a program’s working memory at the moment it crashed or terminated abnormally. This becomes essential when your program is complex, crashes halfway through execution, and you have no idea what caused the crash.

In Android, when an application crashes, the stack trace along with the last known address that caused the crash is printed into the logcat logs. On Linux this is not enabled by default; this post will help you enable core dumps on Linux.

We will use this simple program to cause a crash –

#include <stdio.h>

int main() {
    int a = 5, b = 0;
    int c = a / b;
    printf("Value of c is %d", c);
    return 0;
}

Compile this source code using gcc –

$ gcc -g main.c

The -g flag to gcc enables debug symbols; without it the coredump file will not be very useful. You can also use the -ggdb option to produce debug symbols specifically for gdb. More information on the debugging options available in gcc can be found in its documentation.

Run the program –

$ ./a.out
Floating point exception

Okay, so our program crashes as expected. But the message “Floating point exception” is not very helpful in telling us where exactly this problem happened and also note that there was no message about the core being dumped.

Let’s turn on the coredump. Using the ulimit command we can check the core file size limit.

$ ulimit -a
core file size          (blocks, -c) 0
…

Increase the size to unlimited –

$ ulimit -c unlimited

Check whether the change has taken effect –

$ ulimit -a
core file size          (blocks, -c) unlimited
…

Now we need to ensure that the coredump is redirected to a file instead of someplace else. To check your current setting, run –

$ cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport %p %s %c

The “|” indicates to the kernel that the rest of the pattern is a command to be run. Save this command if you want to restore your default settings later. To redirect the dump to a file in the current working directory, overwrite the pattern (as root) with –

$  echo "core.%e.%p" > /proc/sys/kernel/core_pattern

This will create a file called core.<executable_name>.<process_id> in the current working directory.

Now that everything is in place, let’s run the program again –

$ ./a.out
Floating point exception (core dumped)
$ ls
a.out core.a.out.3244 main.c

The core dump has been generated. Now let’s use gdb to look into it. Start gdb with the binary and the coredump file –

$ gdb <path_to_bin> <path_to_coredump_file>
...
Core was generated by `./a.out'.
Program terminated with signal SIGFPE, Arithmetic exception.
#0  0x0000000000400547 in main () at main.c:5
5           int c = a / b;

 

By default, it shows the last executed statement, and you can run other gdb commands such as –

  • bt – shows the backtrace
(gdb) bt
#0  0x0000000000400547 in main () at main.c:5
  • list – shows the source code, shows 10 lines by default
(gdb) list
1       #include <stdio.h>
2
3       int main() {
4           int a = 5, b = 0;
5           int c = a / b;
6           printf("Value of c is %d", c);
7           return 0;
8       }
  • info locals – Values for local variables
(gdb) info locals
a = 5
b = 0
c = 0
  • print <variable_name> – Prints the value of the variable
(gdb) print a
$1 = 5
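The same inspection can be scripted: gdb’s batch mode runs a list of commands and exits, which is handy when you just want a quick report. A sketch, using the example binary and core file names from above:

```shell
# Sketch: non-interactive crash report using gdb's -batch and -ex
# options. The binary and core paths are the example ones from above.
gdb_core_report() {
    bin="$1"
    core="$2"
    gdb -batch -ex bt -ex "info locals" "$bin" "$core"
}
```

For example, `gdb_core_report ./a.out core.a.out.3244` prints the backtrace and local variables in one shot.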

Those were a few commands that were useful to me; there are many more gdb commands, which you can find on the internet or by typing help in the gdb console.


Creating a header file from a Java class file

The javah command provides this utility. By default the header file gets generated in the directory where the command is run (this can be changed too; just run javah to get a list of all the available options).

Usage – javah <Fully qualified class name>

You can also provide the location of the classes to this command as an argument, using the -classpath option. The directory should point to the location containing com/example/*.class files.
E.g., javah -classpath ../../this/is/points/to/my/classes/dir com.example.testheader
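Putting it together, a sketch of the full flow – the source path and class name are illustrative (reusing com.example.testheader from above), and the generated header name follows javah’s underscore convention:

```shell
# Sketch: compile a class with native methods, then generate its JNI
# header in the current directory. Names here are illustrative.
gen_jni_header() {
    javac com/example/testheader.java
    javah -classpath . com.example.testheader   # writes com_example_testheader.h
}
```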


jar, javac and Android

TL;DR – Each Android build has a required javac version; if there is a mismatch between this version and the javac used to create a jar file, bad things will happen.

Story Version – Integrating a jar into an Android application is pretty straightforward and well documented. (For the lazy, this link shows a couple of ways to do it.)

My setup though is a little different: I was trying to include the jar file using the Android build system (Marshmallow, with the Jack and Jill compilers turned on by default). Creating a jar in the Android build system is pretty easy (include BUILD_STATIC_JAVA_LIBRARY in the Android.mk file, and in the application refer to the new jar using LOCAL_STATIC_JAVA_LIBRARIES).

But what if you wanted to create the jar outside of the Android build system? Pretty straightforward, right – build all the class files with javac and package them into a jar from the command line (you can also build jar files in Eclipse and Android Studio). While trying to include a jar created this way, I was getting a very strange error – “Binary transformation of <jarfile> failed.”

A little digging around in the source code pointed me to this location, which points to the Jill tool. That makes sense – I am trying to include a jar, and the Jill tool is responsible for converting the jar classes to the Android library format. The error though was still not very helpful at all. With another round of googling I found this patch – “bad class file magic (cafebabe) or version (0034.0000)”. Now we are getting somewhere: the version mentioned, 34, is in hexadecimal, which is 52 in decimal, and looking at the Java class file description, 52 means Java SE 1.8. Now this made sense since I used JDK 1.8 to compile the jar – so 1.8 is not supported?
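A quick way to confirm which JDK produced a class file is to read its version bytes directly: bytes 6–7 of the .class header hold the big-endian major version (52 = Java SE 8, 51 = Java SE 7 – the same 0x34 that appeared in the Jill error). A sketch using od:

```shell
# Bytes 6-7 of a .class file are the big-endian major version number:
# 52 (0x34) = Java SE 8, 51 (0x33) = Java SE 7.
class_major_version() {
    od -An -j6 -N2 -tu1 "$1" | awk '{ print $1 * 256 + $2 }'
}
```

Running it on a class compiled with JDK 1.8 prints 52, matching the 0034 in the error above.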

So I tried to create the jar on my Linux setup, which had JDK 1.7 installed, and this jar was accepted straight away!! So it looks like the Android full build has a dependency on the java compiler version?! For now it looks like it: when I built the jar using the Android full build system the jar works fine, and when I compiled it with JDK 1.7 it works as well. So I started looking (grepping, rather) for any inkling of a java compiler version mentioned in a makefile somewhere in the build system.

Lo and behold what I found – the required javac version for the Android build system is mentioned in <root>/build/core/main.mk with the variable name “required_javac_version”, and no points for guessing what it was set to: 1.7 🙂

Further exploring the javac options, I found that you can tell the compiler which version of Java you want to target and also which language feature set your java source files use; both can be passed to the compiler as options – e.g., javac -source 1.7 -target 1.7 .\testJar.java

One thing that threw me off was that a jar I had received from another team was working fine, and that jar was built using javac and the Ant build system, not the Android build system. I spoke to this guy and he said he was using JDK 1.8 as well!! That didn’t make any sense, so I asked him to show me his build file for the Ant build system, and this is what I found – <javac target="1.7" destdir="${build.dir}/classes-ant"> 🙂

We take the tools we use every day for granted; if it works, we look no further. This exercise taught me a valuable lesson: learn the tools of the trade, and remember that source code trumps everything else. Don’t believe everything everyone says, and don’t take your tools for granted.