A datacenter decommissioning last year gave me access to some old Netbotz (now APC) monitoring gear.  I jumped at the chance to save it from the garbage, as I remembered it being a pretty slick way to get basic environmental data (temp, door switches, etc.), and its small physical footprint made it an easy way to control a number of networked cameras at once.

My recollection was right, and while the management server appliance proved both annoying to work with and massive overkill for my home projects, I’ve had some luck scraping data out of its web UI.

I wrote a few Python classes to make it reasonable to programmatically access the data generated by the cameras and sensors in my backyard chicken coop; you can see the end result at Pachube.

The code is available here.  It’s driven by a small db schema that holds basic location info for the monitoring gear and metadata about sensors being watched (e.g. polling intervals and alert thresholds), and produces simple name/value pairs with sensor data satisfying given conditions.  By “conditions” here I mean things like “it’s time to report this sensor value” or “this value changed too much, so I’m reporting on it” — this is by no means meant to replace actual monitoring systems (though it could easily be used to interface with them).
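In rough Python, the per-sensor decision amounts to something like this (a minimal sketch; the names here are simplified for illustration, not the actual classes from the repo):

import time

def should_report(sensor, state, value):
  ## sensor: per-sensor metadata from the db (polling interval, alert threshold)
  ## state: the last value/time we reported for this sensor
  now = time.time()
  if now - state.get('last_report_time', 0) >= sensor['poll_interval']:
    return True  ## it's time to report this sensor value
  prev = state.get('last_value')
  if prev is not None and abs(value - prev) > sensor['alert_threshold']:
    return True  ## this value changed too much, so I'm reporting on it
  return False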

While I wrote fairly thorough pydoc, the overall documentation is hardly extensive, so I’d be happy to answer questions if getting at Netbotz data from Python is interesting to anybody else.

 

Pachube provides an API-exposed store of data from pretty much anything, with a focus on sensor data.  It’s free to use and has what seems to be a pretty complete, reasonably designed, and easy-to-use API with a nicely flexible authorization model.

I’ve been working on some code to scrape data out of the Netbotz web UI (which does not have an easy-to-use API) and, as I got to working on the data storage backend for the scraper, remembered Pachube and decided to give it a try rather than reinventing what was effectively the same wheel.

I have generally positive impressions of Pachube after a couple of days of messing around with it.  It was pretty easy to get going, and after stumbling into one quickly-fixed bug I was up and sending data in from my prototype scraper in short order.

I’m using the following Python class to simplify the interaction.  It’s trivial, but I didn’t see anything similar already done, so I’m throwing it here for future google results:


#!/usr/bin/python

# This code is released into the public domain (though I'd be interested in seeing
#  improvements or extensions, if you're willing to share back).

import mechanize
import json

class PachubeFeedUpdate:

  _url_base = "http://api.pachube.com/v2/feeds/"
  _feed_id = None
  _version = None
  ## the substance of our update - list of dictionaries with keys 'id' and 'current_value'
  _data = None
  ## the actual object we'll JSONify and send to the API endpoint
  _payload = None
  _opener = None

  def __init__(self, feed_id, apikey):
    self._version = "1.0.0"
    self._feed_id = str(feed_id)  ## coerce to str so URL construction in sendUpdate() works
    self._opener = mechanize.build_opener()
    ## Pachube's v2 API authenticates via an API key passed in this request header
    self._opener.addheaders = [('X-PachubeApiKey', apikey)]
    self._data = []
    self._payload = {}

  def addDatapoint(self, dp_id, dp_value):
    ## queue the current value for one datastream; call once per datastream
    self._data.append({'id': dp_id, 'current_value': dp_value})

  def buildUpdate(self):
    ## assemble the queued values into the structure the API expects
    self._payload['version'] = self._version
    self._payload['id'] = self._feed_id
    self._payload['datastreams'] = self._data

  def sendUpdate(self):
    ## mechanize sends a POST when a data argument is present; the _method=put
    ##  query parameter asks Pachube to treat it as the PUT its REST API expects
    url = self._url_base + self._feed_id + "?_method=put"
    try:
      self._opener.open(url, json.dumps(self._payload))
    except mechanize.HTTPError as e:
      print "An HTTP error occurred: %s" % e

Usage is pretty straightforward.  For example, assuming you have defined key (holding a valid API key from Pachube) and feed (the feed ID, created with a few clicks in the browser once you’ve registered), it’s basically like:


pfu = PachubeFeedUpdate(feed, key)
# do some stuff; gather data, repeating as necessary for any number of datastreams
pfu.addDatapoint(<datastream_id>, <data_value>)
# finish up and submit the data
pfu.buildUpdate()
pfu.sendUpdate()

The resulting datapoints basically end up looking like they were logged at the time the sendUpdate() call is made.  In my situation, I want to send readings from a couple of dozen sensors each into their own Pachube datastream in one shot, so this works fine.  If, instead, for some reason you need to accumulate updates over time without posting them, you’d need to take a different approach.
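If you do need that, the v2 API appears to provide a per-datastream datapoints endpoint that accepts explicit timestamps.  I haven’t used it, so treat the following as an untested sketch based on my reading of the API docs (the endpoint path and payload shape are my assumptions, not something I’ve verified against the service):

import mechanize
import json

def send_history(feed_id, stream_id, apikey, datapoints):
  ## datapoints: list of (iso8601_timestamp, value) tuples, e.g.
  ##  [("2011-05-01T12:00:00Z", "21.5"), ...]
  opener = mechanize.build_opener()
  opener.addheaders = [('X-PachubeApiKey', apikey)]
  url = "http://api.pachube.com/v2/feeds/%s/datastreams/%s/datapoints" \
      % (feed_id, stream_id)
  payload = {'datapoints': [{'at': at, 'value': str(v)} for (at, v) in datapoints]}
  opener.open(url, json.dumps(payload))  ## a plain POST; no _method override needed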

 

I set out this weekend to get an Arduino board to control my Roomba.  (The Roomba has a great – and generally open – interface, and iRobot deserves significant credit for encouraging creative repurposing/extension of their products.)  I’ve got a few project ideas in mind, but for an initial step I just wanted to verify that the Arduino could a) send control commands to the Roomba (“move forward”, “turn right”, etc.), and b) read its sensor data (“something is touching my left bumper”, “I’m about to fall down the stairs”).  This post contains my notes, which will hopefully help others doing this sort through some of the issues in a bit less time than I spent.
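For a flavor of what those two operations involve at the protocol level, here’s a minimal, untested Python/pyserial sketch (run from a PC over a serial cable rather than from the Arduino, just to keep the example short; the port name is system-specific, and the opcodes and byte layout are from my reading of the iRobot SCI spec):

import serial  ## pyserial
import struct
import time

## opcodes from the published Roomba SCI spec
START, CONTROL, DRIVE, SENSORS = 128, 130, 137, 142

port = serial.Serial('/dev/ttyUSB0', 57600, timeout=1)

def wake():
  ## START enters passive mode; CONTROL then enables actuator commands
  port.write(chr(START)); time.sleep(0.1)
  port.write(chr(CONTROL)); time.sleep(0.1)

def drive(velocity, radius):
  ## velocity in mm/s (-500..500) and turn radius in mm, each sent as a
  ##  big-endian 16-bit two's-complement value after the DRIVE opcode
  port.write(chr(DRIVE) + struct.pack('>hh', velocity, radius))

def bumpers():
  ## sensor packet group 1 is 10 bytes; byte 0 holds the bump/wheeldrop bits
  port.write(chr(SENSORS) + chr(1))
  data = port.read(10)
  return ord(data[0]) & 0x03  ## bits 0/1 = right/left bumper pressed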

 

I’ve been playing with Arduino boards in my limited spare time over the past few months.  It’s a fun way to spend quality hands-on geek time that is clearly distinct (at least to me) from my day job.  Plus, I’m able to start actually instantiating some of the ubiquitous computing / distributed sensor ideas that have been floating around in my head.

I’ve been working on a simple wireless light, temp, and motion sensor.  Light was a trivial CdS photocell connected to an analog port of the Arduino.  My first attempt at temp uses the Dallas Semiconductor DS18B20 digital one-wire sensor, which is pretty slick for $4.25.

There was some good sample code on the main Arduino site, but I spent a bit of time fleshing it out more completely, adding the ability to configure sensor resolution and extracting the temp value from the returned data.  Code is here, if this is interesting or useful to you.
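For reference, turning the two temperature bytes the sensor returns into degrees is just a matter of reading them as a signed 16-bit count of sixteenths of a degree C (assuming the default 12-bit resolution); shown here in Python for clarity:

def ds18b20_to_celsius(lsb, msb):
  ## combine the two scratchpad bytes into a signed 16-bit value
  raw = (msb << 8) | lsb
  if raw & 0x8000:
    raw -= 1 << 16  ## negative temps are two's complement
  return raw / 16.0  ## at 12-bit resolution, one count = 1/16 degree C

print ds18b20_to_celsius(0x91, 0x01)  ## 0x0191 -> 25.0625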

 

Thanks to Amy and JB's motivational dissing of our old (and oft-broken) mythtv installation, I set out last weekend to rebuild the setup, which involves a backend system in the basement and a frontend in the TV room driven by an xbox that can retrieve recorded video from the myth system downstairs.

I had a very easy time getting current myth (0.20) installed on a SuSE 10.2 box with a Hauppauge PVR-350 last weekend, and as I’ve come to expect in 4+ years of myth use, 0.20 is noticeably better than 0.18.  Since the xbox scripts that interface with myth are version-specific, I needed to update them too, and this was enough motivation for me to go ahead and get a modern XBMC install.

It’s all working now, and it seems pretty cool.

I’ll spare you the particulars, but there were 2 non-obvious problems I encountered along the way, the corresponding solutions to which I thought I’d leave here where google can find them:

problem #1: database connection errors from the xbmcmythtv script. This was puzzling, as I’d verified that the host-level packet filters on the server running mysqld were allowing traffic, I was seeing successful TCP connections, and I had verified that a different machine on the network could connect to the target db using the username+password I had configured xbmcmythtv to use. Since the xbox has relatively few other diagnostic capabilities, I used Wireshark (formerly Ethereal) to watch more closely, and found a mysql auth error being sent back by the server that read:

Client does not support authentication protocol requested by server; consider upgrading MySQL client

solution #1: use pre-MySQL-4.1 style password encoding on the server (e.g. by resetting the user’s password with OLD_PASSWORD(), or enabling the old_passwords setting in my.cnf). With the specific error string (unhelpfully obscured by the xbmc script), google quickly found this note in the MySQL reference manual.

problem #2: script says “caching subtitles” when I try to play a recorded show, then appears to hang for a while before returning to the program listing. This one was quite a head-scratcher for a while, since I wasn’t trying to do anything with subtitles, I couldn’t find any caching options that seemed related, and there was no other indication of something that might be failing (permissions, etc.). What’s more, this problem was coming up after I was successfully getting the list of recorded programs, which meant the xbox was successfully talking to the backend server (mysql, smb, and the myth backend process are all on the same box).

I found some threads on various xbox fora that described very similar problems, but none with solutions that were even potentially relevant.

solution #2: use IPs in the xbmcmythtv config. I had been using the FQDN of my backend box in both the db and general paths config screens of xbmcmythtv. The xbox is configured to use an internal DNS server that is authoritative for the domain in question, and DNS was demonstrably working: the DB connection and the connection to the mythtv backend both succeeded, as evidenced by my ability to retrieve the program listing.

While desperately searching for clues on the “caching subtitles” problem, I found a mention in some random “common problems” document that emphasized that unqualified hostnames (i.e. missing the full domain) in the xbmcmythtv config would not work. I was already using FQDNs, but on a lark I tried replacing the FQDN with the IP of the backend server in the SMB path part of the config, and sure enough, that did the trick.

Apologies for what was certainly a very boring post for, well, anybody who came here except via a search for one of the aforementioned problems. For those of you who did get here looking for answers, I hope this helped.

 

It’s bike-to-work week here in Madison, which makes it apropos to link to this guy, who mounted a camera and a bit o’ electronics to take a photo every 10 seconds as he bikes across NYC:

(You get the point after a few seconds of the video, but it’s a neat project.)

 

The worm below is participating (unwillingly, perhaps) in the creation of some pretty cool aleatoric music.  This reminds me of the software-based aleatoric project Doug and I did more than a decade ago (entitled, for some reason, "Just be Limp and Let me Abuse You").  I think I actually just encountered a floppy disk containing a backup of the code a couple of weeks ago.  Doug:  do you remember what the input for JBLALMAY was?

[Article. Via STREETtech.]

 

As part of my aforementioned switch from the Mac to a Windows box as my primary environment, I abandoned my long-time Mac-only RSS aggregator for google reader.  So far I’ve been pretty pleased — it’s easily usable, and keeps up with the excessive quantity of feeds I aggregate, too.  Of course, having yet another basic app be network-based is also a plus.

A friend at work recently pointed me to the shared items feature of google reader, which creates a meta-feed of sorts consisting of the items I flagged as interesting from the various feeds I aggregate.  You can see them here, or grab a feed.  I just started using this, but could see it filling a potentially useful role for very lightweight interest tracking; for example, for things I think are neat but not worth tagging in del.icio.us, much less blogging.

 

IANAL, but I found interesting, and a bit puzzling, the Wisconsin Court of Appeals’ recent refusal to admit as evidence a tape recording of a school bus driver verbally and physically abusing a child on his bus.  The issue the court considered pertains to the recording, of course, not the abuse.  In Wisconsin it’s OK to record conversations to which one is party, but apparently it’s not OK to disclose those recordings, and that was the sticking point for the 2-1 majority.

Policy-wise, we have a long way to go toward a workable future of surveillance and recording technologies.  I’m conflicted on this one — it seems reasonable to be able to record things one is directly perceiving, and it seems absurd to suggest that there is some ever-present bubble of confidentiality (i.e. that one would be limited in one’s own use of such recordings).  On the other hand, the notion of everyone recording and sharing everything always is troubling.

 

Two items have caught my eye this week in the ceaselessly crashing waves of information and distraction also known as "being online".

Item the first:  a new island has been identified off the coast of Greenland.  A new island?!?  Due, predictably, to … (wait for it) … global warming.  Or "global climate change".  Or "the global struggle against our continued existence", or whatever we’re calling it now.  As it’s not possible to turn around recently without more articulation of the interesting times we live in with regard to climate, I’ll leave it at that:  new island.

Item the second: a bi-directional brain<->computer interface intended to (someday) serve as artificial memory. This is research in progress at USC, but it has already achieved the ability to simulate 12,000 neurons and interact with real brain cells.  You need to read past the popular-science hype like "reducing memory loss to nothing more than a computer glitch", but there’s some cool potential there.

I found the difference in my reaction to these stories interesting, as well.  With regard to the first, my reaction is along the lines of "oh, #$!^%", and "we’re totally screwed".  While I’m mostly not what I would call an emotional environmentalist, from a pragmatic point of view I’ve long thought our modern growth-driven world fails to understand and appropriately respect the complexity and interdependence of environmental systems.  Our inputs to the world’s systems (internal combustion, population growth, agriculture, etc.) have impacts that are both unpredictable and leveraged, and then before we know it, we’re staring at melting ice caps and waiting for submarine Miami.

Brain-computer interfaces, on the other hand, excite me a great deal.  With all due respect to those who differ in this regard, I’m not personally persuaded by literal interpretations of various mythologies of the origin or purpose of life.  I don’t think there is anything categorically off-limits about us working to modify our brains, or to use technology to enhance and extend our cognitive capabilities.

In fact, I’d argue strongly that we already have and depend on such enhancements and extensions (I just need this very low-bandwidth keyboard/screen interface and a few D/A converters to access my supplemental memories).  I think it’s theoretically difficult to identify a big bright line that separates the current state of our technological enhancement to human thought from a more capable, faster, wetter one we may have in the form of mid-21st century brain implants.

Of particular interest, I thought, was the mechanical focus of the USC researchers.  With regard to the not-yet-understood aspects of human cognition and consciousness and the impact such unknowns might have on his work, the lead researcher remarks: "A repairman doesn’t need to understand music to fix your broken CD player".

I like that approach.  It makes sense, and it’s an interesting avenue into a subject of such massive complexity as the brain, in that it doesn’t depend on an understanding of internal semantics, but rather just the observable mechanics.

On the other hand, part of me does wonder what the melting-ice-cap of brain implants will turn out to be: the consequence we find ourselves fretting about in the early 22nd century.
