Posts Tagged ‘Python’

Time-lapse photography with the Raspberry Pi

Thought it would be fun to set up a time-lapse rig with my Raspberry Pi & its camera, having never tried that type of photography before.  A bit of afternoon coding, and success:

11 hours compressed to 60 seconds.

This is what the camera-rig looks like:


Used some MicroRAX to create a simple frame for the Pi & its camera.

Install dependencies

You can download my Python time-lapse code here.  For the examples below, just stick it in your home (~) folder.

It uses the fantastic picamera library.  To install:

sudo pip install picamera

Record the stills

Executing the timelapse.py code is easy: it will create a /time-lapse subfolder where it places all the jpgs.  It doesn’t need any arguments to run; in that case, it will record for an hour, capturing enough images to make a one-minute-long video at 30fps.  It’s designed to take the guesswork out of figuring out how many frames to capture at what interval for a given framerate: it handles it all for you behind the scenes, and it’s all configurable.  To query the help:

$ python timelapse.py -h
usage: timelapse.py [-h] [-ct float] [-dur int] [-fps int] [-xres int]
                    [-yres int] [-q int] [-y int] [-m int] [-d int] [-hr int]
                    [-min int] [-s int]

Time for time-lapse! To start recording at a certain time, pass in any or all
of the time related args. If no time-related args are passed in, recording
will start immediately.

optional arguments:
  -h, --help            show this help message and exit
  -ct float, --captureTime float
                        in HOURS, default 1.0
  -dur int, --duration int
                        of final movie in SECONDS, default 60
  -fps int, --framesPerSecond int
                        of final movie (default 30)
  -xres int, --Xresolution int
                        of image (default 1280)
  -yres int, --Yresolution int
                        of image (default 720)
  -q int, --quality int
                        of jpeg from 1-100 (default 85)
  -y int, --year int    ...to start recording
  -m int, --month int   ...to start recording
  -d int, --day int     ...to start recording
  -hr int, --hour int   ...to start recording
  -min int, --minute int
                        ...to start recording
  -s int, --second int  ...to start recording
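
Under the hood, the math is straightforward.  Here’s a hedged sketch of the kind of calculation it does behind the scenes (the variable names are illustrative, not the actual source):

# Illustrative math only -- not the actual timelapse.py source:
captureTime = 12.0  # hours of capture (-ct)
duration = 60       # length of the final movie in seconds (-dur)
fps = 30            # framerate of the final movie (-fps)

totalFrames = duration * fps                   # 1800 stills needed
interval = (captureTime * 3600) / totalFrames  # one still every 24.0 seconds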

So for example, to capture for 12 hours and end up with a one-minute-long video:

python timelapse.py -ct 12.0 -dur 60

It also supports a delayed start if you pass in any of the time values.  For example, if you pass in an hour, it will wait for that hour to start recording; if you pass in a minute, it’ll wait for that minute of the current hour, and so on.  You can pass in any combination of year, month, day, hour, minute, and second, or none at all.  If none, it starts capturing immediately.
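
For example, to start recording at 6:30 in the morning, using the -hr and -min flags from the help above:

python timelapse.py -hr 6 -min 30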

Finally, I’ve learned that if you’re logging in via ssh, you should launch your code via nohup:

nohup python timelapse.py -ct 12.0 -dur 60

If you don’t do that, when you close the remote shell, it’ll kill the process, and no timelapse for you!
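
And while nohup alone keeps the capture alive, appending a standard & will also background the process and give you your prompt back (generic shell behavior, nothing specific to the script):

nohup python timelapse.py -ct 12.0 -dur 60 &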

Make a movie

After you capture all the stills, how do you turn them into a movie?  The mencoder software can be used on the Pi for that.  I found a tutorial here that provides a solution.  To install:

sudo apt-get install mencoder

First make a list of files from your /time-lapse folder (from the above tutorial link):

cd time-lapse
ls *.jpg > stills.txt

Then, to convert them into a movie with mencoder (modified version of the above example):

mencoder -nosound -ovc lavc -lavcopts vcodec=mpeg4:aspect=16/9:vbitrate=8000000 -o tlcam_01.avi -mf type=jpeg:fps=30 mf://@stills.txt

Copy to your PC

This will create a new avi file on the Pi.  To move it to your PC, on Mac/Linux you can use scp (the example below uses my Pi’s IP; change it to match yours).  Note that the command below is executed from your PC, not the Pi, and copies the file to my Mac’s home folder:

scp pi@192.168.2.27:~/time-lapse/tlcam_01.avi ~/tlcam_01.avi

Or you can use this great tutorial on how to use SFTP via FileZilla, if you’re more comfortable in a windowed environment.

Once I got my first movie copied over, I couldn’t play it (on my Mac) via the QuickTime player.  However, my install of VLC opened it no problem.  From there it was uploaded to YouTube: done!

Scrolling the Adafruit 16×2 LCD+Keypad

I wanted to add additional functionality to my Raspberry FM project by having any long station names / song names scroll on the Adafruit 16×2 LCD+keypad for Raspberry Pi.  While the LCD Python module Adafruit provides has methods to scroll the text (scrollDisplayLeft, scrollDisplayRight, autoScroll), I was unable to get them to work, nor could I find any good examples.  Maybe it’s completely possible with what they provide, but I had no luck with it.  If anyone does know how, please comment! :)

Why not write my own?  That’s a fun thing to do on a Saturday afternoon, right? 😉  Find a snapshot of the Python source below, and a link to the most current version on Bitbucket here: lcdScroll.py

To see it in action:

It’s a standalone module I designed to work with an LCD of any size.  It’s really just a text formatter: all the drawing to the LCD would be handled by some other application.  A very simple example can also be found on Bitbucket here: lcdScrollTest.py.  Or of course you could check out the Raspberry FM source here: raspberryFm01.py

For either the top or bottom line, if it is longer than 16 characters (a limit that’s completely adjustable based on the type of LCD used), it will auto-scroll.  If it’s 16 characters or less, no scrolling happens.  So you can have completely independent scrolling on each line based on its length.

#!/usr/bin/python
"""
lcdScroll.py
Author             :  Eric Pavey 
Creation Date      :  2014-02-08
Blog               :  http://www.akeric.com/blog

Free and open for all to use.  But put credit where credit is due.

OVERVIEW:-----------------------------------------------------------------------
Create scrolling text on a LCD display.  Designed to work on the the 
Adafruit LCD  + keypad, but it's not tied to any specific hardware, and should
work on a LCD of any size.

See lcdScrollTest.py for simple example usage.
"""

class Scroller(object):
    """
    Object designed to auto-scroll text on a LCD screen.  Every time the scroll()
    method is called, it will scroll the text from right to left by one character
    on any line that is longer than the provided width.
    If the lines ever need to be reset / updated, call the setLines() method.
    """
    def __init__(self, lines=[], space=" :: ", width=16, height=2):
        """
        Instance a LCD scroller object.

        Parameters:
        lines : list / string : Default empty list : If a list is passed in, each
            entry in the list is a string that should be displayed on the LCD,
            one line after the next.  If a string, it will be split by any embedded
            linefeed \n characters into a list of multiple lines.
            Ultimately, the number of entries in this list must be equal to or
            less than the height argument.
        space : string : Default " :: " : If a given line is longer than the width
            argument, this string will be added to the end to help designate the
            end of the line has been hit during the scroll.
        width : int : Default 16 : The width of the LCD display, number of columns.
        height : int : Default 2 : the height of the LCD, number of rows.
        """
        self.width = width
        self.height = height
        self.space = space
        self.setLines(lines)

    def setLines(self, lines):
        """
        Set (for the first time) or reset (at any time) the lines to display.
        Sets self.lines

        Parameters:
        lines : list : Each entry in the list is a string
            that should be displayed on the LCD, one line after the next.  The 
            number of entries in this list must be equal to or less than the 
            height argument.
        """
        # Just in case a string is passed in, turn it into a list, and split
        # by any linefeed chars:
        if isinstance(lines, basestring):   
            lines = lines.split("\n")
        elif not isinstance(lines, list):
            raise Exception("Argument passed to lines parameter must be list, instead got: %s"%type(lines))
        if len(lines) > self.height:
            raise Exception("Have more lines to display (%s) than you have lcd rows (%s)"%(len(lines), self.height))
        self.lines = lines
        # If the line is over the width, add in the extra spaces to help separate
        # the scroll:
        for i,ln in enumerate(self.lines[:]):
            if len(ln) > self.width:
                self.lines[i] = "%s%s"%(ln,self.space)

    def scroll(self):
        """
        Scroll the text by one character from right to left each time this is
        called.

        Return : string : The message to display to the LCD.  Each line is separated
            by the \n (linefeed) character that the Adafruit LCD expects.  Each line
            will also be clipped to self.width, so as to not confuse the LCD when
            later drawn.
        """
        for i,ln in enumerate(self.lines[:]):
            if len(ln) > self.width:
                # Rotate the line one character to the left:
                shift = "%s%s"%(ln[1:], ln[0])
                self.lines[i] = shift
        truncated = [ln[:self.width] for ln in self.lines]
        return "\n".join(truncated)
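
To show how it plugs in, here’s a minimal usage sketch (hedged: lcdScrollTest.py is the real example, and the lcd.message() call named in the comment is a placeholder for whatever actually draws to the screen):

import time
from lcdScroll import Scroller

# Two lines for a 16x2 LCD; the long second line will auto-scroll:
scroller = Scroller(lines="Groove Salad\nA Nice Ambient Downtempo Song Title",
                    width=16, height=2)
while True:
    message = scroller.scroll()
    # message is a linefeed-separated string, each line clipped to 16 chars;
    # hand it to whatever draws on the LCD, e.g. lcd.message(message)
    time.sleep(0.35)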

Raspberry FM Part 2, the sequel

Nearly a year ago I wrapped up a project I called “Raspberry FM”: a Raspberry Pi based internet radio streamer coupled with a MaKey MaKey as the interface.  Why the name Raspberry FM?  It’s a mashup of a Raspberry Pi and my favorite internet radio station, Soma FM.  It worked, but had a few problems:

  • There was no visual feedback, the MaKey MaKey was purely an input.
  • I was having a hard time getting Python to interface with MPlayer (probably due to my own ignorance), the audio player I had chosen, so everything was done via bash.

Fast forward nearly a year: learning about the MPD audio player/server and (one of) its clients, MPC, re-piqued my interest in programming a Python app to play music on the Pi.  The second time around, it all turned out really well: new software combined with new hardware and a 3D-printed case turned it into a good-looking, compact unit.

An overview of the whole process is below (pic on top, video on bottom).  During development I decided to write my own Python music player around mpc & Adafruit’s LCD library.  At this point there are several others online, but I enjoyed coding it from scratch.
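
The core trick is that mpc is a command-line client, so Python can simply shell out to it.  A hedged sketch of that pattern (not the actual Raspberry FM source; see Bitbucket for that):

import subprocess

def mpc(*args):
    # Run an mpc command and return its output, e.g. mpc('next') or mpc('volume', '+5'):
    return subprocess.check_output(['mpc'] + list(args))

mpc('play', '1')       # start the first station in the playlist
print mpc('current')   # the current station/song info, ready for the LCD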


Raspberry FM Features:

  • Auto-on with the Pi
  • If no internet is present or it drops out, the app will go into a holding pattern until it returns.
  • Can change between any number of stations (left/right buttons).
  • Stations are easy to add via the command line: no need to update the Python source.  SSH into the Pi and add/remove what you need.
  • Increase/decrease volume (up/down buttons).
  • Station and song info is displayed and auto-scrolls.
  • Shut down the Pi (select+left) or quit the program (for debugging, select+right)
  • Lots of color changing when buttons are pressed!

Hardware needed:

  • Raspberry Pi (I used a B model)
  • Adafruit RGB 16×2 LCD+Keypad Kit: Solder and install!
  • Optional: Custom 3D printed case I designed (well, I designed the top part); download on Thingiverse.  Print & install!  Took about 1:20 to print on my MakerBot Replicator (1).
  • I stream the internet radio over cat5, but I’ve also had success with wifi.
  • I use the headphone jack for audio out.

Software needed:

  • This was all done on the Raspbian distro via NOOBS.
  • My “Raspberry FM” Python program.  Find on Bitbucket.
  • You’ll need Adafruit’s CharLCDPlate library.
  • FYI, I coded this all via the Adafruit WebIDE; I’d recommend it to anyone else as well, to help manage the various Python modules on the Pi.
  • MPD & MPC:  sudo apt-get install mpc mpd

Steps:

  • I presume you already have your Pi set up.  If not, see my notes here on the general steps needed to get a Pi live and kicking.
  • Set up the Pi to auto-login.  See notes here.
  • Download the Raspberry FM Python program to a folder of your choosing.  Since I coded this via the WebIDE, both the creation of my code and the integration of the Adafruit LCD modules were handled there.  Make sure you download all the Adafruit CharLCDPlate modules as well and put them in the same directory.
  • Install MPD & MPC.
  • Add stations to MPC.  This is super easy on the commandline.  May I recommend anything from SomaFM?
    mpc add http://ice.somafm.com/groovesalad
  • Set up the Pi to auto-run a program on start.  See notes here.  You will point that script to wherever you saved the Raspberry FM Python script.  For example, my startup.sh script looks like:
    #!/bin/bash
    echo startup.sh : Launching raspberryFm.py
    sudo python /usr/share/adafruit/webide/repositories/my-pi-projects/Adafruit_CharLCDPlate/raspberryFm01.py
  • Restart the Pi and listen to the music!

The final result in action:

Control your Arduino via Python with your Raspberry Pi

I recently ran across the nanpy library for Python when looking at different internet radio projects using the Raspberry Pi.  It allows you to easily control your Arduino from Python, and it installs on the Pi in a snap:

First, install Arduino:

$ sudo apt-get install arduino

Next, install the nanpy source; it’s needed later to build the new Arduino firmware:

$ cd ~
$ curl -O https://pypi.python.org/packages/source/n/nanpy/nanpy-v0.8.tar.gz
$ tar xvf nanpy-v0.8.tar.gz
$ rm nanpy-v0.8.tar.gz

Now install the required Python libs:

$ sudo pip install nanpy
$ sudo pip install pyserial

Hook up your Arduino to one of the Pi’s USB ports, and create/upload the new firmware (using an Arduino Uno as an example):

$ cd ~/nanpy/firmware
$ export BOARD=uno
$ make
$ make upload

From there, programming my Pi via the Adafruit WebIDE (with my Arduino hooked up to a breadboard with the required LEDs and resistors), I recreated a few basic Arduino sketches to see how it all worked.  Everything behaved as expected: simple and easy.

Here is a port of the basic Arduino Blink sketch:

from nanpy import Arduino as A
led = 13

# SETUP:
A.pinMode(led, A.OUTPUT)

# LOOP:
while True:
    A.digitalWrite(led, A.HIGH)  # turn the LED on (HIGH is the voltage level)
    print "blink on"
    A.delay(1000)                # wait for a second
    A.digitalWrite(led, A.LOW)   # turn the LED off by making the voltage LOW
    print "blink off"
    A.delay(1000)

And a port of the basic Fade sketch:

from nanpy import Arduino as A
led = 9
brightness = 0 
fadeAmount = 5

# SETUP:
A.pinMode(led, A.OUTPUT)

# LOOP:
while True:
    # set the brightness of pin 9:
    A.analogWrite(led, brightness)
    # change the brightness for next time through the loop:
    brightness += fadeAmount
    # reverse the direction of the fading at the ends of the fade: 
    if brightness == 0 or brightness == 255:
        fadeAmount = -fadeAmount
    # wait for 30 milliseconds to see the dimming effect 
    A.delay(30)

Only a few core libraries are currently supported.  To see the list, you can visit this page.

On a side note, I should point out that I went to great lengths to get this working on my Mac, without much success.  You can check out my Python Wiki post on it, under the “Mac Notes” section.

4WOC: Week 3

This post will follow my next 7 days of creativity.  Back to Week 2, forward to Week 4.

Day 21: Sunday, December 1st, 2013

Scanning Printers

I picked up a used Xbox 360 Kinect at the local GameStop for $50 & downloaded the trial version of Skanect (for Mac).  I have this idea in my head of scanning body parts (arms, heads, feet), drawing 3d mesh on them, and then printing the result to create cool masks/shoes/etc.  For my first attempt, I thought it’d be funny to scan my MakerBot Replicator:

I don’t think my MacBook Air is cut out for this kind of heavy processing: my attempts to scan the whole thing, 360 degrees, would crash the software repeatedly.  I could get away with scanning just the front and sides, but unless I dropped the reconstruction option to “low”, it would crash as well.  I have yet to figure out if I can get GPU processing enabled… currently it’s disabled despite the fact I have a GeForce 320M card in my laptop.  The free version only outputs meshes with 5000 tris… and I’m not sure I want to drop $129 on a piece of software that crashes often (I’m guessing due to my hardware setup) and may not do what I want (questionable short-range resolution).  Unfortunately it’s about the only option I’m aware of for Mac that uses the Kinect.

Day 20: Saturday, November 30th, 2013

Was out of town on trip.  Limited creativity.  Unless you count wine tasting creative :)

Day 19: Friday, November 29th, 2013

The end of Kivy: I successfully got an app working where I click on screen and it draws textured circles with a tinted color.  Way too much effort, unfortunately.  Don’t get me wrong: I’m sure if I were writing a full-blown mobile app I would continue to be more enthused; it has a great 2d widget library.  But just for throwing a lot of sprites on screen, I’m just not getting it.  Probably based on my own ignorance of the language, but knocking that stuff out in Processing or PyGame just seems way easier.  I could post a photo but well… it’s just sad.

Day 18: Thursday, November 28th, 2013

Thanksgiving.  I creatively ate some turkey.

Day 17: Wednesday, November 27th, 2013

Nervous Printing

While reading my feeds today, I ran across this one, by Nervous System: Kinematics.  I really enjoy and appreciate the designs/art/jewelry/code they create, having followed them over the years.  I was immediately fascinated by their online “Kinematics App” (apparently written in JavaScript).  That looked great, but it only generates items for purchase.  I then tracked down the companion app “Kinematics@Home”, which allows you to generate simplified designs and provides you with an stl file for download.  Seemed perfect to fill today’s ‘creative slot’.

Making the design was easy, and downloading the stl went without a hitch.  I dropped it into MakerWare, and 1hr 22min of printing on my Replicator later, I had a functional bracelet.

There were problems, however: some of the mesh on the side with the clasp-peg didn’t print properly.  I’d seen a similar issue before; it happens when multiple shells (individual pieces of 3d mesh) intersect one another.  To quickly resolve it, I uploaded the stl to Netfabb Cloud, and a few minutes later I had a repaired stl file.  According to the repair report, the “bad” stl had 425 holes and 381 separate shells.  The fixed version had no holes and 45 shells: the exact same number of segments the bracelet has.

You can download the stl for printing and get more info/pics over on Thingiverse.

Day 16: Tuesday, November 26th, 2013

Even More Kivy!

Although I really have nothing visual to show for it, I did a lot more Kivy learning: trying to wrap my head around the general process flow when an application starts, and the most important built-in methods.  I also tried to draw a simple sprite… and didn’t have much success; I’m sure I’m missing something fundamental.

I’m starting to update my Python wiki with a Kivy section.

Total time:  about an hour and a half.

Day 15: Monday, November 25th, 2013

More Kivy!

Yesterday (day 14) I got the latest version of Kivy installed.  However, I had run into some problems getting it to debug properly with Wing IDE, based on my Mac development environment.  I kept getting exceptions:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 180, in set_icon
    im = pygame.image.load(filename)
error: File is not a Windows BMP file

So with a little bit of trickery I was able to modify the set_icon() function in window_pygame.py to this:

def set_icon(self, filename):
    im = None
    try:
        if not exists(filename):
            return False
        try:
            # pygame's image loader raises on non-BMP icon files: swallow
            # the error and simply skip setting the icon, rather than crash:
            im = pygame.image.load(filename)
        except:
            pass
        if im:
            pygame.display.set_icon(im)
            super(WindowPygame, self).set_icon(filename)
    except:
        Logger.exception('WinPygame: unable to set icon')

More notes over on my Python Wiki, but boom:  Debugging in Wing successful.

Bolstered by that success, I was able to get through both of Kivy’s tutorials.

I also got PyInstaller installed, as a first step to making packaged apps in the future, since that is the method Kivy recommends.

My hope was that I’d find the same initial joy in Kivy that I found in the Processing API.  Truth is, that isn’t the case: there is a lot more overhead in Kivy to get things working compared to Processing.  That being said, I see promise: it’s super easy to get pretty things working in Processing, but to take your apps to the ‘next level’ there is a lot of overhead… namely needing to really dig into Java and fully grok Eclipse.  I have a feeling once I learn it, it should be the reverse in Kivy: more overhead to get simple things working, but far easier to take them to the ‘next level’.  Time will tell.

Total time:  About 2 hours.