Android Adventures, part 5: access the camera in Processing

Go to part 4…, part 6…

This became far more difficult than anything I’d previously tried to do with the hardware :)

I thought it would be a simple matter to access the camera’s pixel data in Processing, but that was not the case.  And I should point out I can’t take credit for everything below:  the camera passes back a byte stream encoded in YUV format, which my brain simply couldn’t/wouldn’t decode.  I’d already run across the Ketai library before (here, & here), and discovered that they had written a YUV decoder function (since they’ve already completed this exercise I’m trying…), so my solution below uses a direct implementation of their code.  So a huge thank you to that project!

As well, I used concepts for my CameraSurfaceView class from examples in the book  Android Wireless Application Development, page 340.

At any rate, it works.  Camera pixel data is passed to Processing, and displayed as a PImage on the screen.  It’s not fast (1 fps?), which is a bit disappointing, but it’s a start!

Eric Pavey - 2010-11-15

Set Sketch Permissions : CAMERA
Add to AndroidManifest.xml:
    <uses-feature android:name="" />
    <uses-feature android:name="" />

import android.content.Context;
import android.hardware.Camera;
import android.hardware.Camera.PreviewCallback;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;
import android.view.Surface;

// Setup camera globals:
CameraSurfaceView gCamSurfView;
// This is the physical image drawn on the screen representing the camera:
PImage gBuffer;

void setup() {
  size(screenWidth, screenHeight, A2D);
}

void draw() {
  // nuttin'... onPreviewFrame below handles all the drawing.
}

// Override the parent (super) Activity class:
// States onCreate(), onStart(), and onStop() aren't called by the sketch.  Processing is entered
// at the 'onResume()' state, and exits at the 'onPause()' state, so just override them:

void onResume() {
  super.onResume();
  // Set orientation here, before Processing really starts, or it can get angry:
  orientation(LANDSCAPE);
  // Create our 'CameraSurfaceView' object, that works the magic:
  gCamSurfView = new CameraSurfaceView(this.getApplicationContext());
}

class CameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
  // Object that accesses the camera, and updates our image data
  // Using ideas pulled from 'Android Wireless Application Development', page 340

  SurfaceHolder mHolder;
  Camera cam = null;
  Camera.Size prevSize;

  // SurfaceView Constructor: ---------------------------------------------------
  CameraSurfaceView(Context context) {
    super(context);
    // Processing PApplets come with their own SurfaceView object which can be accessed
    // directly via its object name, 'surfaceView', or via the below function:
    // mHolder = surfaceView.getHolder();
    mHolder = getSurfaceHolder();
    // Add this object as a callback:
    mHolder.addCallback(this);
  }

  // SurfaceHolder.Callback stuff: ------------------------------------------------------
  void surfaceCreated (SurfaceHolder holder) {
    // When the SurfaceHolder is created, create our camera, and register our
    // camera's preview callback, which will fire on each frame of preview:
    cam =;
    cam.setPreviewCallback(this);

    Camera.Parameters parameters = cam.getParameters();
    // Find our preview size, and init our global PImage:
    prevSize = parameters.getPreviewSize();
    gBuffer = createImage(prevSize.width, prevSize.height, RGB);
  }

  void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    // Start our camera previewing:
    cam.startPreview();
  }

  void surfaceDestroyed (SurfaceHolder holder) {
    // Give the cam back to the phone:
    cam.stopPreview();
    cam.release();
    cam = null;
  }

  //  Camera.PreviewCallback stuff: ------------------------------------------------------
  void onPreviewFrame(byte[] data, Camera cam) {
    // This is called every frame of the preview.  Update our global PImage.
    gBuffer.loadPixels();
    // Decode our camera byte data into RGB data:
    decodeYUV420SP(gBuffer.pixels, data, prevSize.width, prevSize.height);
    gBuffer.updatePixels();
    // Draw to screen:
    image(gBuffer, 0, 0);
  }

  //  Byte decoder : ---------------------------------------------------------------------
  void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    // Pulled directly from the Ketai library's YUV decoder:
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++) {
      int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
      for (int i = 0; i < width; i++, yp++) {
        int y = (0xff & ((int) yuv420sp[yp])) - 16;
        if (y < 0)
          y = 0;
        if ((i & 1) == 0) {
          v = (0xff & yuv420sp[uvp++]) - 128;
          u = (0xff & yuv420sp[uvp++]) - 128;
        }

        int y1192 = 1192 * y;
        int r = (y1192 + 1634 * v);
        int g = (y1192 - 833 * v - 400 * u);
        int b = (y1192 + 2066 * u);

        if (r < 0)
           r = 0;
        else if (r > 262143)
           r = 262143;
        if (g < 0)
           g = 0;
        else if (g > 262143)
           g = 262143;
        if (b < 0)
           b = 0;
        else if (b > 262143)
           b = 262143;

        rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
      }
    }
  }
}

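Since the decoder itself is plain Java, it can be sanity-checked off the phone. Below is a standalone copy of it run against a tiny hand-built NV21 frame; the 2×2 test data and expected color are my own, not from the Ketai project:

```java
// Standalone sanity check for the Ketai-style YUV420SP (NV21) decoder.
public class DecodeCheck {
    static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & ((int) yuv420sp[yp])) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {
                    v = (0xff & yuv420sp[uvp++]) - 128;
                    u = (0xff & yuv420sp[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = (y1192 + 1634 * v);
                int g = (y1192 - 833 * v - 400 * u);
                int b = (y1192 + 2066 * u);
                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;
                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
    }

    public static void main(String[] args) {
        // 2x2 frame, every pixel near-white (Y=235), neutral chroma (U=V=128):
        byte[] nv21 = { (byte) 235, (byte) 235, (byte) 235, (byte) 235,  // Y plane
                        (byte) 128, (byte) 128 };                        // one V,U pair
        int[] rgb = new int[4];
        decodeYUV420SP(rgb, nv21, 2, 2);
        System.out.println(Integer.toHexString(rgb[0]));  // prints fffefefe
    }
}
```

A luma of 235 with neutral chroma should land every channel at 254 (an ARGB value of 0xFFFEFEFE), which is what the print confirms.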
Go to part 4…, part 6…

Android Adventures, part 4: vibrate Processing
Android Adventures, part 6: Building a signed .apk from Processing
    • Daniel
    • November 16th, 2010 1:14pm

    What is the reason for setting your RGB values to 262143? Shouldn’t the values go from 0->255? What I am basically trying to accomplish is to evaluate RGB values from 0-255 and create a grayscale equivalent. So I need RGB888 coloring.

  1. The whole function decodeYUV420SP() is from the Ketai lib (as I’d mentioned above), not my code. They set those values, and then bit-shift them to legible hexadecimal vals on the last line (at least, that’s my take on it) which Processing can interpret as valid colors.

    By default the API preview format is NV21:
    Which is a ‘YCrCb’ format, also known as YUV.
    Based on my experience (with my Galaxy S), telling the API to output a different format simply wouldn’t work. It’d be great if it returned rgb888, but I was unable to make this happen, thus the need to find a yuv converter.
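For what it’s worth, the magic numbers line up once you read the decoder as 18-bit fixed-point math: 1192 is roughly 1.164 × 1024 (the BT.601 luma scale), 262143 is 2^18 − 1 (the 18-bit ceiling), and the final shifts divide by 1024 to land back in 0–255. A quick check of that reading (the sample values here are my own):

```java
public class FixedPointCheck {
    public static void main(String[] args) {
        // The 18-bit max shifted down 10 bits is exactly the 8-bit max:
        System.out.println(262143 >> 10);  // prints 255
        // A mid-range luma: 1192 * (128 - 16) = 133504 -> 130 in 8-bit terms:
        System.out.println(133504 >> 10);  // prints 130
    }
}
```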

    • francesco
    • November 16th, 2010 2:08pm

    I’ve implemented the ketai conversion routine in a sketch I wrote as well, and I get more or less the same sluggish frame rate; from what I could see, the bottleneck is indeed in the yuv->rgb conversion.

  2. As a test I tossed the pixels up there with none of the conversion (just grabbed the value, so it was a weird gray-scale), and it was nearly just as slow. I think it may be that the actual process of updating the PImage’s pixels is what’s slowing it down.

    I posted this topic on the Android-Processing forum, and someone suggested this post:
    Where it copies the camera pixel data to an openGL texture, and supposedly goes a lot faster. I have yet to test it, but plan on it 😉
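
For comparison, the no-conversion grayscale test described above boils down to reading only the luma plane; this is my own sketch of that shortcut, not code from the original post:

```java
public class GrayPreview {
    // Build gray ARGB pixels straight from the NV21 luma plane
    // (the first width*height bytes of the preview buffer):
    static void decodeGray(int[] rgb, byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;
        for (int i = 0; i < frameSize; i++) {
            int y = 0xff & yuv420sp[i];                     // luma byte, 0-255
            rgb[i] = 0xff000000 | (y << 16) | (y << 8) | y; // replicate into R, G, B
        }
    }

    public static void main(String[] args) {
        byte[] frame = { 0, (byte) 128, (byte) 255, 64 };   // 2x2 Y plane only
        int[] rgb = new int[4];
        decodeGray(rgb, frame, 2, 2);
        System.out.println(Integer.toHexString(rgb[1]));    // prints ff808080
    }
}
```

Skipping the 16 offset and the 1.164 scale is part of why the result looks like a “weird gray-scale” rather than proper luma, but it isolates the cost of just pushing pixels into the PImage.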

  3. Thanks a ton for this. Amazingly this works in real-time on my LG G-Slate…now to figure out how to access the various cameras and play more. Cheers!

    • Tim
    • June 21st, 2011 5:01am

    Anyone know how to activate and deactivate the camera on command?
    For example if i want to write an app that turns the camera on when i touch the screen, and turns it off when i touch it again.

    So far the best i can do is stopping the preview, but the actual camera hardware is still active.

  4. Actually I’ve been meaning to learn that too, but haven’t got around to it yet. “Work” keeps getting in the way 😛

    • Tim
    • June 21st, 2011 9:05am

    oh my god i got it working. what an unexpected surprise :)

  5. Nice. Got some code samples anywhere? 😉

    • Tim
    • June 21st, 2011 9:24pm

    Are there any special tags i need to put here to paste code?

    • Patrick Pagano
    • June 22nd, 2011 4:15am

    i keep getting this error: “is already defined in this compilation unit”

    when i try to run this for android. Any tips on how to solve it?


  6. Shoot, I don’t know Tim, I figured just a link somewhere 😉 Cool you got it working.

    Hey Pat: I’ve not got that error before either. But if you Google:
    “is already defined in this compilation unit”
    There are quite a few hits.

    • Tim
    • June 22nd, 2011 7:41am

    Switching the Camera hardware on and off.

    Sorry if it’s not easily readable.

    • Josh Peek
    • June 22nd, 2011 1:21pm

    Neato! I have been having success running Android Processing on my Sony Xperia, but this script dies at:

    cam =;


    java.lang.RuntimeException: Fail to connect to camera service

    any thoughts out there?


    • Josh Peek
    • June 22nd, 2011 2:28pm

    @Josh Peek

    Oh god, that was total n00b FAIL: RTFM. Sorry, forget I asked anything!

    Awesome script!


    • Josh Peek
    • June 24th, 2011 1:33pm

    Okay, I have a better question.

    I am using a phone with a 16:9 (ish) aspect ratio screen, but the preview frames seem to be 4:3. this means I have a ugly white band on the right side of my image. It looks like the image is off-center as well — as if it is missing a piece of the preview image. Any idea how to convince the camera to send the correct sized preview image?


    • Tim
    • June 25th, 2011 5:35am

    by default the camera is set to 640×480.
    i haven’t tried this myself, but you could try

    //the code up to this bit…
    Camera.Parameters parameters = cam.getParameters();
    parameters.setPreviewSize(3264×2448); // for 8 megapixels
    // the rest of the code…

    • Tim
    • June 25th, 2011 6:11am

    by which i meant (3264,2448)
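
A caveat worth adding: setPreviewSize() takes the width and height as two separate ints, and the pair generally has to be one the driver reports via parameters.getSupportedPreviewSizes(), so asking for an 8-megapixel preview will be rejected on most hardware. The picking logic can be tried out in plain Java over {width, height} pairs; the “supported” list below is made up for illustration:

```java
public class SizePicker {
    // Pick the supported size whose pixel count is closest to the requested one:
    static int[] closestSize(int[][] supported, int wantW, int wantH) {
        int[] best = supported[0];
        long bestDiff = Long.MAX_VALUE;
        for (int[] s : supported) {
            long diff = Math.abs((long) s[0] * s[1] - (long) wantW * wantH);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // A plausible (made-up) list of supported preview sizes:
        int[][] supported = { {176, 144}, {320, 240}, {640, 480}, {800, 480} };
        int[] pick = closestSize(supported, 800, 600);
        System.out.println(pick[0] + "x" + pick[1]);  // prints 800x480
    }
}
```

In the sketch you would then presumably call parameters.setPreviewSize(pick[0], pick[1]) followed by cam.setParameters(parameters) before starting the preview.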

    • redrum008
    • July 27th, 2011 1:35pm

    what type of object is PImage gBuffer?

  7. It’s a… PImage object :)
    It is filled with the return from the createImage() command:

    • Nima Motamedi
    • August 28th, 2011 12:54pm

    Hey, thanks for this effort!

    If the camera is 5 megapixel, is your code converting every pixel from the camera? If so, this is overkill and I think the reason why it’s slow. If the LCD display has 480 x 800 resolution, is there a way to downsample the byte stream from the camera first, then do the RGB converting??


  8. I believe this is grabbing the camera *preview* data, the “smaller” image size the camera displays in realtime on the display surface (based on your example, in 480×800, which is like .3 megapixels) before the “high-res” image is actually taken. lol, and it’s still slow 😛

    • Nima Motamedi
    • August 29th, 2011 4:34am

    Ok, there goes my theory 😛

    • Reto
    • January 1st, 2012 3:44pm

    Got the code example working on my 3.2 Medion P9514 tab and it feels great for a noob like me having installed the SDK and Processing 2a today!!!
    It worked without any (or better say “usual”) frame delay after I’ve:
    – changed renderer to P2D
    – checked CAMERA as well as READ_FRAME_BUFFER in the permissions
    Thanks for the hints.

    • Josh
    • February 6th, 2012 7:52pm

    I’ve tried running this sketch on my Galaxy Nexus with no luck. No errors, but all I get is a white screen. Any ideas?

  9. I don’t off hand. When you only have one piece of hardware to test, and it works, it makes it hard to troubleshoot others. It’s possible too that the underlying API code has changed since I authored it (on Android 2.2), but that’s just a guess.

    • Josh
    • February 8th, 2012 2:20pm

    Ya I believe that’s the reason. I tried other camera apps that were made for previous versions of Android with the same result. I implemented my own that works ok but I can’t seem to get the aspect ratio right. Also when I take a picture it saves it to internal memory but doesn’t display it in the gallery until like a week later lol. Still working on it right now.

    • ggdv
    • February 12th, 2012 11:24am

    I have problem to run this code, this is the error log,
    could someone help me

    java.lang.RuntimeException: Fail to connect to camera service
    at android.hardware.Camera.native_setup(Native Method)
    at android.hardware.Camera.<init>(
    at changethispackage.beforesubmitting.tothemarket.camara.Camara$CameraSurfaceView.surfaceCreated(
    at android.view.SurfaceView.updateWindow(
    at android.view.SurfaceView.access$000(
    at android.view.SurfaceView$3.onPreDraw(
    at android.view.ViewTreeObserver.dispatchOnPreDraw(
    at android.view.ViewRoot.performTraversals(
    at android.view.ViewRoot.handleMessage(
    at android.os.Handler.dispatchMessage(
    at android.os.Looper.loop(
    at java.lang.reflect.Method.invokeNative(Native Method)
    at java.lang.reflect.Method.invoke(
    at dalvik.system.NativeStart.main(Native Method)

    • Tim
    • February 13th, 2012 12:49am

    Remember to go to sketch permissions and enable the camera.

    • Subhendu
    • February 24th, 2012 6:40pm

    I am getting the following error even after making the changes in AndroidManifest.xml and the sketch. Can someone help?

    [javac] location: class changethispackage.beforesubmitting.tothemarket.android_camera.Android_camera
    [javac] public String sketchRenderer() { return A2D; }
    [javac] ^
    [javac] 1 error

    F:\Android-SDK\android-sdk-windows\tools\ant\build.xml:602: The following error occurred while executing this line:
    F:\Android-SDK\android-sdk-windows\tools\ant\build.xml:622: Compile failed; see the compiler error output for details.

    Total time: 4 seconds

    • Frankensound
    • May 30th, 2012 8:22pm

    works here on Samsung Galaxy SII after enabling camera permissions and removing A2D in setup. Any ideas for accessing the front camera also?

  10. lol, I authored that code before front cams existed on android phones. I presume it can be done, but don’t personally have a phone that supports it 😉

    • Dr. Smith
    • September 25th, 2012 4:55pm


    I’m a science teacher, fascinated by your camera code. I’d love to develop an App for my students that allows them to take images of their data, and then analyze it. I have code to analyze the data, and it works. Unfortunately, I know little about programming and capturing images (..and I’m terrified about asking questions on blogs like this). I’m having trouble with the import library statements at the beginning of your code. Based on what I have read, this is trivial to you and your followers. I, however, am clueless.

    No matter what I try- the libraries are not found. I understand that in Processing the libraries have to be in the sketch folder, but I don’t know what to put in there.

    The error messages I get are:
    Problem moving Camera to build folder.
    No library found for android.content, No library found for blah, blah…

    Any assistance you can offer is greatly appreciated.

    Dr. Smith

  11. No worries :) This camera code was some of the more complex stuff I had to deal with on the phone. It’s going to be two years old soon, and I’ve not touched it in a long time, so there is a good possibility that based on the latest versions of Processing, and the version of phone you’re using, it’s no longer entirely valid.

    All that being said, my guess is that if you can’t import those libraries, you don’t have the Android SDK installed? I blogged about the steps I went through years ago here:
    And this is the main Processing page on how to get that stuff going:

    And generally, I’ve found the Processing forums to be a very friendly place:
    So if you can’t figure out what’s going on from those above posts, try the forum.

    If you do get it working, it’d be great to hear about what you come up with :)

    Good luck!

    • Dr. Smith
    • September 26th, 2012 5:56am

    I’ve successfully developed an image analysis App using Processing and I am able to use it on my phone for an image I’ve generated, so I don’t think it is an issue with the Android SDK. My App would be a great tool for my students if they are able to capture an image with their camera, and then analyze the data. I also understand that your code may not work on my phone (or my students), but I want to try. I’ll keep plugging away, but best I can tell, the trouble is finding these libraries.

    Dr. Smith

    • Serge
    • September 27th, 2012 11:29pm


    Hi; I had the same problem (white screen after startup). I managed to fix it by adding the following code after “cam.setPreviewCallback(this);” in the void surfaceCreated:

    catch (Exception e) { }

    Works like a charm now! I think the FPS is also improved by adding

    • Serge
    • September 27th, 2012 11:31pm

    … sorry, pressed enter which submitted the post.. =]

    I think the FPS is also improved by adding


    directly after:

    cam =;


    • Dr. Smith
    • October 1st, 2012 6:14am

    Got it working!! Too many issue had to be resolved, not worth going into. I’ll keep in touch to let you know what I came up with.

    BTW, love all the stuff you have posted and thanks for the quick response.

    Dr. Smith

    • Victor
    • October 4th, 2012 1:10pm

    Thanks for the code and discussion. I was trying to run it on my Nexus S Jelly Bean (with Processing 2.0.3) but all I got was a gray screen… I tried the try-throw block suggested by Serge, I got a flash of what the camera sees, and the app crashed with this exception shown in the Processing IDE:

    java.lang.RuntimeException: Image width and height cannot be larger than 0 with this graphics card.

    It seems that there is something wrong in the onPreviewFrame callback method but I have no idea what actually happened…

    Did anyone have the same problem and figured out how to solve that?

    • Brendan
    • November 25th, 2012 8:34pm

    Great code. Does anyone know how to store the camera footage data so it could be viewed and altered on a computer?

    • MaxMax
    • February 3rd, 2013 8:20am

    Hello AKeric and everybody,

    Thank you for the precious introduction.

    My Question is:
    If I need just rgb values of specific points on the image:
    How did I get the rgb value (in 8bit per channel (0-255)) of pixel in chosen x,y coordinates?

    like this:

    //in the onPreviewFrame method:

    //And in the decodeYUV420SP method:
    //if I divide just through 1028?

    int decodeYUV420SP () {

    // code as above

    int r = Math.floor((y1192 + 1634 * v) / 1028);
    int g = Math.floor((y1192 - 833 * v - 400 * u) / 1028);
    int b = Math.floor((y1192 + 2066 * u) / 1028);

    // after that if r>255 than 255…

    // and then return r,g,b
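
The idea above is close, but the decoder’s final shift divides by 1024 (that’s what the `>> 10` is), not 1028, and the clamp to 262143 has to happen before the divide. Here’s a hedged single-pixel version using the same BT.601 constants as the decoder; the function and helper names are my own, not from the post:

```java
public class PixelProbe {
    // Return {r, g, b} in 0-255 for pixel (x, y) of an NV21 frame:
    static int[] rgbAt(byte[] yuv420sp, int width, int height, int x, int y) {
        final int frameSize = width * height;
        int yVal = (0xff & yuv420sp[y * width + x]) - 16;
        if (yVal < 0) yVal = 0;
        // Chroma is stored one V,U pair per 2x2 block, V first:
        int uvp = frameSize + (y >> 1) * width + (x & ~1);
        int v = (0xff & yuv420sp[uvp]) - 128;
        int u = (0xff & yuv420sp[uvp + 1]) - 128;
        int y1192 = 1192 * yVal;
        int r = clamp(y1192 + 1634 * v) >> 10;   // >> 10 divides by 1024
        int g = clamp(y1192 - 833 * v - 400 * u) >> 10;
        int b = clamp(y1192 + 2066 * u) >> 10;
        return new int[] { r, g, b };
    }

    // Clamp to the 18-bit range before the shift, as the full decoder does:
    static int clamp(int c) {
        return c < 0 ? 0 : (c > 262143 ? 262143 : c);
    }

    public static void main(String[] args) {
        // 2x2 near-white frame (Y=235, neutral chroma):
        byte[] nv21 = { (byte) 235, (byte) 235, (byte) 235, (byte) 235,
                        (byte) 128, (byte) 128 };
        int[] rgb = rgbAt(nv21, 2, 2, 0, 0);
        System.out.println(rgb[0] + "," + rgb[1] + "," + rgb[2]);  // prints 254,254,254
    }
}
```

rgbAt() indexes the chroma plane the same way the full decoder does, so it should agree channel-for-channel with decodeYUV420SP() at any coordinate.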


    • pauline
    • April 4th, 2013 3:36am

    Hi, would it be possible to implement this library ( with your sketch ?
    I wouldn’t have need the decompression I guess, just the coordinates of blobs, which would be sent via osc to my main processing application.

    • Dr Smith
    • September 13th, 2013 9:41am

    I tried Serge’s try/catch code and it works on my new phone. However, it seems to be stuck in preview mode, and won’t execute anything else in Processing. Any suggestions??

    • George Triantafyllakos
    • November 7th, 2013 12:30am

    Excellent code. Thank you, it helped me a lot.
    Is there a way to start and stop the camera with mousePressed?
    How can I call cam.startPreview() and cam.stopPreview() outside of the class?

    • Linda
    • January 7th, 2016 6:55pm

    To make it faster, you don’t need to draw the image to the screen. Delete that part and enjoy your full fps.

    • Senthil
    • November 10th, 2016 3:00am

    Hi Akeric, I’m looking for an image processing algorithm for the receiver end, where the user focuses on the light source in a Visible Light Communication setup. I had a source from the below pdf document, where the receiver-side ndk code is provided.

    From this document, the ndk code for the receiver end is incomplete. If you get the chance, could you compare your logic with the receiver-end c++ code in that pdf document and give me your valuable comments on the same? Since it’s incomplete, can you suggest the logic to make it a complete ndk implementation?

  12. @Senthil
    Personally I won’t be of any help to you, but maybe someone else will be 😉