
Android Adventures, part 5: access the camera in Processing

Go to part 4…, part 6…

This turned out to be far more difficult than anything I'd previously tried to do with the hardware :)

I thought it would be a simple matter to access the camera's pixel data in Processing, but that was not the case.  And I should point out I can't take credit for everything below: the camera passes back a byte stream encoded in YUV format, which my brain simply couldn't/wouldn't decode.  I'd already run across the Ketai library before (here, & here), and discovered that they had written a YUV decoder function (since they've already completed this exercise I'm attempting), so my solution below uses a direct implementation of their code.  A huge thank you to that project!

I also used concepts for my CameraSurfaceView class from examples in the book Android Wireless Application Development, page 340.

At any rate, it works: camera pixel data is passed to Processing and displayed as a PImage on the screen.  It's not fast (1 fps?), which is a bit disappointing, but it's a start!

/**
CameraPixelData
Eric Pavey - 2010-11-15

Set Sketch Permissions : CAMERA
Add to AndroidManifest.xml:
    <uses-feature android:name="android.hardware.camera" />
    <uses-feature android:name="android.hardware.camera.autofocus" />
*/

import android.content.Context;
import android.hardware.Camera;
import android.hardware.Camera.PreviewCallback;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;
import android.view.Surface;

// Setup camera globals:
CameraSurfaceView gCamSurfView;
// This is the physical image drawn on the screen representing the camera:
PImage gBuffer;

void setup() {
  size(screenWidth, screenHeight, A2D);
}

void draw() {
  // nuttin'... onPreviewFrame below handles all the drawing.
}

//-----------------------------------------------------------------------------------------
//-----------------------------------------------------------------------------------------
// Override the parent (super) Activity class:
// States onCreate(), onStart(), and onStop() aren't called by the sketch.  Processing is entered
// at the 'onResume()' state, and exits at the 'onPause()' state, so just override them:

void onResume() {
  super.onResume();
  println("onResume()!");
  // Set orientation here, before Processing really starts, or it can get angry:
  orientation(LANDSCAPE);

  // Create our 'CameraSurfaceView' object, which works the magic:
  gCamSurfView = new CameraSurfaceView(this.getApplicationContext());
}

//-----------------------------------------------------------------------------------------
//-----------------------------------------------------------------------------------------

class CameraSurfaceView extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
  // Object that accesses the camera, and updates our image data
  // Using ideas pulled from 'Android Wireless Application Development', page 340

  SurfaceHolder mHolder;
  Camera cam = null;
  Camera.Size prevSize;

  // SurfaceView Constructor: ---------------------------------------------------
  CameraSurfaceView(Context context) {
    super(context);
    // Processing PApplets come with their own SurfaceView object which can be accessed
    // directly via its object name, 'surfaceView', or via the below function:
    // mHolder = surfaceView.getHolder();
    mHolder = getSurfaceHolder();
    // Add this object as a callback:
    mHolder.addCallback(this);
  }

  // SurfaceHolder.Callback stuff: ------------------------------------------------------
  void surfaceCreated (SurfaceHolder holder) {
    // When the SurfaceHolder is created, create our camera, and register our
    // camera's preview callback, which will fire on each frame of preview:
    cam = Camera.open();
    cam.setPreviewCallback(this);

    Camera.Parameters parameters = cam.getParameters();
    // Find our preview size, and init our global PImage:
    prevSize = parameters.getPreviewSize();
    gBuffer = createImage(prevSize.width, prevSize.height, RGB);
  }  

  void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    // Start our camera previewing:
    cam.startPreview();
  }

  void surfaceDestroyed (SurfaceHolder holder) {
    // Give the cam back to the phone:
    cam.stopPreview();
    cam.release();
    cam = null;
  }

  //  Camera.PreviewCallback stuff: ------------------------------------------------------
  void onPreviewFrame(byte[] data, Camera cam) {
    // This is called every frame of the preview.  Update our global PImage.
    gBuffer.loadPixels();
    // Decode our camera byte data into RGB data:
    decodeYUV420SP(gBuffer.pixels, data, prevSize.width, prevSize.height);
    gBuffer.updatePixels();
    // Draw to screen:
    image(gBuffer, 0, 0);
  }

  //  Byte decoder : ---------------------------------------------------------------------
  void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    // Pulled directly from:
    // http://ketai.googlecode.com/svn/trunk/ketai/src/edu/uic/ketai/inputService/KetaiCamera.java
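    // YUV420SP (NV21) layout: width*height Y (luma) bytes up front, then
    // interleaved V/U chroma pairs, one pair shared by each 2x2 pixel block.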
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++) {
      int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
      for (int i = 0; i < width; i++, yp++) {
        int y = (0xff & ((int) yuv420sp[yp])) - 16;
        if (y < 0)
          y = 0;
        if ((i & 1) == 0) {
          v = (0xff & yuv420sp[uvp++]) - 128;
          u = (0xff & yuv420sp[uvp++]) - 128;
        }

        int y1192 = 1192 * y;
        int r = (y1192 + 1634 * v);
        int g = (y1192 - 833 * v - 400 * u);
        int b = (y1192 + 2066 * u);
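        // Fixed-point BT.601 YUV->RGB, scaled by 1024: 1192 ~= 1.164*1024,
        // 1634 ~= 1.596*1024, 833 ~= 0.813*1024, 400 ~= 0.391*1024,
        // 2066 ~= 2.018*1024.  The results are clamped to 18 bits (262143)
        // below, then shifted back down into 8-bit channels when packed.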

        if (r < 0)
           r = 0;
        else if (r > 262143)
           r = 262143;
        if (g < 0)
           g = 0;
        else if (g > 262143)
           g = 262143;
        if (b < 0)
           b = 0;
        else if (b > 262143)
           b = 262143;

        rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
      }
    }
  }
}
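If the ~1 fps frame rate ever becomes the thing to fix, one knob worth trying is requesting a smaller preview size from the camera before the PImage buffer is created.  The snippet below is just a sketch, not something I've profiled: the 320x240 size is an assumption, and any size should be validated against parameters.getSupportedPreviewSizes() first.  It would replace the size/buffer setup inside surfaceCreated():

// Hypothetical variation on surfaceCreated()'s setup, untested:
Camera.Parameters parameters = cam.getParameters();
// Assumption: the device supports a 320x240 preview; check
// parameters.getSupportedPreviewSizes() before trusting this.
parameters.setPreviewSize(320, 240);
cam.setParameters(parameters);
// Re-read whatever size the camera actually accepted, then size the buffer:
prevSize = parameters.getPreviewSize();
gBuffer = createImage(prevSize.width, prevSize.height, RGB);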

Go to part 4…, part 6…

Android Adventures, part 4: vibrate Android via Processing

Go to Part 3… Part 5…

Over the weekend I had time to dig into more Android / Processing, and decided I should learn how to trigger the phone’s vibration function.

The only thing I found out of the ordinary is that you don’t need to include this code in your AndroidManifest.xml file:

  • <uses-permission android:name="android.permission.VIBRATE" />

But you do need to enable the ‘VIBRATE’ option in the ‘Android -> Sketch Permissions’ menu.

The sketch below simply vibrates the phone when you touch the screen.

// Vibrate Android via Processing
// Eric Pavey - www.akeric.com - 2010-10-24
// You must enable VIBRATE in Android -> Sketch Permissions menu!!!

// Imports:
import android.content.Context;
import android.app.Notification;
import android.app.NotificationManager;
import android.view.MotionEvent;  // needed for surfaceTouchEvent()

// Setup vibration globals:
NotificationManager gNotificationManager;
Notification gNotification;
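// Pattern is in milliseconds, alternating {off, on, off, on, ...}:
// wait 0, buzz 250, pause 50, buzz 125, pause 50, buzz 62.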
long[] gVibrate = {0,250,50,125,50,62};

void setup() {
  size(screenWidth, screenHeight, A2D);
}

void draw() {
  // do nothing...
}

//-----------------------------------------------------------------------------------------
// Override the parent (super) Activity class:
// States onCreate(), onStart(), and onStop() aren't called by the sketch.  Processing is entered
// at the 'onResume()' state, and exits at the 'onPause()' state, so just override them as needed:

void onResume() {
  super.onResume();
  // Create our Notification Manager:
  gNotificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
  // Create our Notification that will do the vibration:
  gNotification = new Notification();
  // Set the vibration:
  gNotification.vibrate = gVibrate;
}

//-----------------------------------------------------------------------------------------
// Override the parent (super) SurfaceView class to detect for touch events:

public boolean surfaceTouchEvent(MotionEvent event) {
  // If user touches the screen, trigger vibration notification:
  gNotificationManager.notify(1, gNotification);
  return super.surfaceTouchEvent(event);
}
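For what it's worth, the same pattern could presumably be played through the Vibrator service directly, skipping the Notification plumbing entirely.  This variant is untested here, but it only touches the stock android.os.Vibrator API, and the VIBRATE sketch permission is still required:

import android.os.Vibrator;

Vibrator gVibrator;

void onResume() {
  super.onResume();
  // Grab the system vibrator service instead of the notification manager:
  gVibrator = (Vibrator) getSystemService(Context.VIBRATOR_SERVICE);
}

public boolean surfaceTouchEvent(MotionEvent event) {
  // Play gVibrate once; the -1 means don't repeat the pattern:
  gVibrator.vibrate(gVibrate, -1);
  return super.surfaceTouchEvent(event);
}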

Go to Part 3… Part 5…

Android Adventures, part 3: hardware detection in Processing

Go to part 2, part 4…

The main reason I got my Android phone was to get Processing running on it and start grabbing the sensor data.  There were no complete examples (that I could find) on how to actually do this, though.  After two days of research I got it working.  Some takeaways:

  • I learned that Processing runs as an Android “Activity“: it enters the Activity at the onResume() state, and exits it at the onPause() state.  You need to override these in your sketch to do what you want.  See the above link for a nice image that shows the state tree.
  • All the code you need to author to set up your SensorManagers, SensorEventListeners, and Sensors needs to happen in the onResume() function: putting this stuff in the sketch’s setup() function won’t work.
  • You can make a single SensorManager, but for each sensor you want to track you need to make a unique SensorEventListener and Sensor.
  • Once I figured it out I realized how easy it actually is.  Like most things 😉

I pulled from a number of resources to get here, in order of discovery/usefulness.

A (cropped) screenshot off the phone of the exciting final product!

And the code:

// android_sensorData
// Eric Pavey - 2010-10-10
// http://www.akeric.com
//
// Query the phone's accelerometer and magnetic field data, display on screen.
// Made with Android 2.1, Processing 1.2

//-----------------------------------------------------------------------------------------
// Imports required for sensor usage:
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorManager;
import android.hardware.SensorEventListener;

//-----------------------------------------------------------------------------------------
// Screen Data:
float sw, sh;
// Font Data:
String[] fontList;
PFont androidFont;

// Setup variables for the SensorManager, the SensorEventListeners,
// the Sensors, and the arrays to hold the resultant sensor values:
SensorManager mSensorManager;
MySensorEventListener accSensorEventListener;
MySensorEventListener magSensorEventListener;
Sensor acc_sensor;
float[] acc_values;
Sensor mag_sensor;
float[] mag_values;

//-----------------------------------------------------------------------------------------

void setup() {
  size(screenWidth, screenHeight, A2D);
  sw = screenWidth;
  sh = screenHeight;
  // Set this so the sketch won't reset as the phone is rotated:
  orientation(PORTRAIT);
  // Setup Fonts:
  fontList = PFont.list();
  androidFont = createFont(fontList[0], 16, true);
  textFont(androidFont);
}

//-----------------------------------------------------------------------------------------

void draw() {
  fill(0);
  rect(0,0,sw,sh);
  fill(255);
  if (acc_values != null) {
    text(("Accelerometer: " + acc_values[0] + " " + acc_values[1] + " " + acc_values[2]), 8, 20);
  }
  else {
    text("Accelerometer: null", 8, 20);
  }
  if(mag_values != null) {
    text(("Magnetic Field: " + mag_values[0] + " " + mag_values[1] + " " + mag_values[2]), 8, 40);
  }
  else {
    text("Magnetic Field: null", 8, 40);
  }
}

//-----------------------------------------------------------------------------------------
// Override the parent (super) Activity class:
// States onCreate(), onStart(), and onStop() aren't called by the sketch.  Processing is entered at
// the 'onResume()' state, and exits at the 'onPause()' state, so just override them:

void onResume() {
  super.onResume();
  println("RESUMED! (Sketch Entered...)");
  // Build our SensorManager:
  mSensorManager = (SensorManager)getSystemService(Context.SENSOR_SERVICE);
  // Build a SensorEventListener for each type of sensor:
  magSensorEventListener = new MySensorEventListener();
  accSensorEventListener = new MySensorEventListener();
  // Get each of our Sensors:
  acc_sensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
  mag_sensor = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
  // Register the SensorEventListeners with their Sensor, and their SensorManager:
  mSensorManager.registerListener(accSensorEventListener, acc_sensor, SensorManager.SENSOR_DELAY_GAME);
  mSensorManager.registerListener(magSensorEventListener, mag_sensor, SensorManager.SENSOR_DELAY_GAME);
}

void onPause() {
  // Unregister all of our SensorEventListeners upon exit:
  mSensorManager.unregisterListener(accSensorEventListener);
  mSensorManager.unregisterListener(magSensorEventListener);
  println("PAUSED! (Sketch Exited...)");
  super.onPause();
} 

//-----------------------------------------------------------------------------------------

// Setup our SensorEventListener
class MySensorEventListener implements SensorEventListener {
  void onSensorChanged(SensorEvent event) {
    int eventType = event.sensor.getType();
    // Copy the array: Android reuses event.values between callbacks.
    if(eventType == Sensor.TYPE_ACCELEROMETER) {
      acc_values = event.values.clone();
    }
    else if(eventType == Sensor.TYPE_MAGNETIC_FIELD) {
      mag_values = event.values.clone();
    }
  }
  void onAccuracyChanged(Sensor sensor, int accuracy) {
    // do nuthin'...
  }
}
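Since the sketch already holds both accelerometer and magnetic field arrays, a natural next step would be deriving the phone's orientation from them.  Below is a minimal, untested sketch using the standard SensorManager.getRotationMatrix() / getOrientation() calls; getOrientationAngles() is a hypothetical helper name, and it could be called from draw() to display azimuth/pitch/roll:

// Hypothetical helper: turn the two sensor arrays into orientation angles.
float[] getOrientationAngles() {
  float[] R = new float[9];       // rotation matrix
  float[] I = new float[9];       // inclination matrix (unused here)
  float[] angles = new float[3];  // azimuth, pitch, roll, in radians
  if (acc_values != null && mag_values != null) {
    if (SensorManager.getRotationMatrix(R, I, acc_values, mag_values)) {
      SensorManager.getOrientation(R, angles);
    }
  }
  return angles;
}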

Go to part 2, part 4…

Android Adventures, part 2

Go to part 1, part 3…

Based on the success of my previous post, covering how I got my Android phone (Samsung Captivate) rooted and the “Scripting Layer For Android” installed with Python, it was time to get Processing working on it.

Of course, it didn’t go easily or smoothly…

A.  Install Processing For Android & Dependencies

Following the ‘Instructions‘ section on the “Processing For Android” page, I installed the required software.

B.  Try Out Some Code

I started with two tutorials, trying to walk through them to get the code working.

I immediately ran into problems…

C. Troubleshooting:

First problem:

I couldn’t get the Android emulator to run any of the example code.  The first error I got started like this:

Importing rules file: tools\ant\ant_rules_r3.xml
BUILD FAILED
G:\android-sd\tools\ant\ant_rules_r3.xml:336: Error running javac.exe compiler

Websearching, I found someone else with the exact same problem here.  On that thread a fix was posted, which linked here.  Following that suggestion, I ended up taking the actions below, which fixed the emulator problem:

  • I added all of these dirs to my PATH system variable (Windows XP):
    • C:\android-sdk-windows;
    • C:\android-sdk-windows\tools;
    • C:\Program Files\Java\jdk1.6.0_21\bin;
  • I added an ‘ANDROID_SDK’ system variable, pointing it to the ‘C:\android-sdk-windows’ dir.
  • I had also previously added an ‘SDK_ROOT’ system variable pointing to the ‘C:\android-sdk-windows\tools’ dir.

I should point out that maybe not all of those steps were necessary, but after doing them all, I could finally get my sketches to load in the Android emulator.

Second Problem:

When I tried to load the sketches on the phone itself via “Presentation Mode” in Processing, I got a new error:

“Device time out”
“Device killed or disconnected”

The computer wasn’t seeing the phone.  In a nutshell: I had to switch the phone’s USB mode to “Enable USB Debugging”.  After doing that, the computer once again failed to recognize the phone, since it now appeared to be in a new USB mode.  Unplugging and reconnecting it prompted me to reinstall the USB drivers, twice in a row.  But once the drivers updated, I was able to get the sketches loaded on the phone.  Success!

D.  Try Out Some Code (Again)

I was then able to go through both of the above tutorials, getting the code running on both the emulator and on my phone itself.  The examples are simple, but they look amazing on that screen.

I can’t wait to start writing some real code and getting it up on the phone.  And I still need to figure out how to actually save it on the phone, and run it later… 😉

Update:

So, the next day I found out that Processing installs the sketches on the phone when you run them.  I found them sitting in the Applications menu, named after the sketch that made them.  I just hadn’t noticed when I initially loaded them.   Nice!

Go to part 1, part 3…

Speech 2 Text 2 Speech

Hot on the heels of my previous post on getting Python scripting onto my Android phone via the “Scripting Layer For Android“, I wrote my first (silly) Python module directly on the phone.  I’ve coined it ‘Speech 2 Text 2 Speech‘: you speak into the phone, it converts your speech to text, then says it back.  And it’s sooo easy to do:

# speech2text2speech.py
# Runs on the phone under the SL4A Python interpreter.
import android

droid = android.Android()
# Pop up the speech-recognition dialog; the reply is an (id, result, error) tuple:
speech = droid.recognizeSpeech("Talk Now", None, None)
print speech[1]            # the recognized text
droid.ttsSpeak(speech[1])  # speak it back via text-to-speech

Sometimes the “Talk Now” window pops up too fast and you have to reset it, but when it works it’s pretty funny to hear what you said repeated back in ‘Androidish’ 😉