I recently encountered a surprising problem while setting up an AWS environment: pylint fails to run in a python 3.6 virtualenv on Amazon Linux 2018.03!

(pylint-test) [root@myhost ~]# pylint
Traceback (most recent call last):
File "/root/pylint-test/bin/pylint", line 11, in <module>
File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/__init__.py", line 15, in run_pylint
from pylint.lint import Run
File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/lint.py", line 64, in <module>
import astroid
File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/__init__.py", line 54, in <module>
from astroid.exceptions import *
File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/exceptions.py", line 11, in <module>
from astroid import util
File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/util.py", line 11, in <module>
import lazy_object_proxy
ModuleNotFoundError: No module named 'lazy_object_proxy'
(pylint-test) [root@myhost ~]#

This post is about hunting down the problem, finding a workaround, and identifying next steps toward a root cause fix.

Initial Troubleshooting

A glance at the stack trace suggests this is simply a matter of a missing module – i.e., that the pylint dependency chain wasn’t correctly installed.

This seemed unlikely, though – I’d simply used pip to install pylint in an uncomplicated virtualenv, and pip had successfully identified and installed the needed dependencies. If this was the problem, then there was a more fundamental problem with either pip or one of the modules it was installing.

I went through my install/setup steps again, just to make sure this wasn’t the result of simple user error.

  • Set up a fresh virtualenv with the desired python version:
    [root@myhost ~]# rpm -qf `which python3`
    [root@myhost ~]# virtualenv-3.6 -p /usr/bin/python3 pylint-test
    Running virtualenv with interpreter /usr/bin/python3
    Using base prefix '/usr'
    New python executable in /root/pylint-test/bin/python3
    Also creating executable in /root/pylint-test/bin/python
    Installing setuptools, pip, wheel...done.
  • Activate the virtualenv:
    [root@myhost ~]# source pylint-test/bin/activate
  • Check that the expected pip is being used in the virtualenv:
    (pylint-test) [root@myhost ~]# pip --version
    pip 10.0.1 from /root/pylint-test/local/lib/python3.6/dist-packages/pip (python 3.6)
  • Install pylint via pip (ignoring anything pip has cached, just in case):
    (pylint-test) [root@myhost ~]# pip --no-cache-dir install pylint
    Collecting pylint
    Downloading https://files.pythonhosted.org/packages/f2/95/0ca03c818ba3cd14f2dd4e95df5b7fa232424b7fc6ea1748d27f293bc007/pylint-1.9.2-py2.py3-none-any.whl (690kB)
    100% || 696kB 7.7MB/s
    Collecting isort>=4.2.5 (from pylint)
    Downloading https://files.pythonhosted.org/packages/1f/2c/22eee714d7199ae0464beda6ad5fedec8fee6a2f7ffd1e8f1840928fe318/isort-4.3.4-py3-none-any.whl (45kB)
    100% || 51kB 9.7MB/s
    Collecting astroid<2.0,>=1.6 (from pylint)
    Downloading https://files.pythonhosted.org/packages/0e/9b/18b08991c8c6aaa827faf394f4468b8fee41db1f73aa5157f9f5fb2e69c3/astroid-1.6.5-py2.py3-none-any.whl (293kB)
    100% || 296kB 8.1MB/s
    Collecting mccabe (from pylint)
    Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl
    Collecting six (from pylint)
    Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
    Collecting lazy-object-proxy (from astroid<2.0,>=1.6->pylint)
    Downloading https://files.pythonhosted.org/packages/65/1f/2043ec33066e779905ed7e6580384425fdc7dc2ac64d6931060c75b0c5a3/lazy_object_proxy-1.3.1-cp36-cp36m-manylinux1_x86_64.whl (55kB)
    100% || 61kB 16.9MB/s
    Collecting wrapt (from astroid<2.0,>=1.6->pylint)
    Downloading https://files.pythonhosted.org/packages/a0/47/66897906448185fcb77fc3c2b1bc20ed0ecca81a0f2f88eda3fc5a34fc3d/wrapt-1.10.11.tar.gz
    Installing collected packages: isort, lazy-object-proxy, wrapt, six, astroid, mccabe, pylint
    Running setup.py install for wrapt ... done
    Successfully installed astroid-1.6.5 isort-4.3.4 lazy-object-proxy-1.3.1 mccabe-0.6.1 pylint-1.9.2 six-1.11.0 wrapt-1.10.11

That all seems as expected. But still, no joy getting pylint to run:

(pylint-test) [root@myhost ~]# pylint
Traceback (most recent call last):
File "/root/pylint-test/bin/pylint", line 11, in <module>
File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/__init__.py", line 15, in run_pylint
from pylint.lint import Run
File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/lint.py", line 64, in <module>
import astroid
File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/__init__.py", line 54, in <module>
from astroid.exceptions import *
File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/exceptions.py", line 11, in <module>
from astroid import util
File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/util.py", line 11, in <module>
import lazy_object_proxy
ModuleNotFoundError: No module named 'lazy_object_proxy'
(pylint-test) [root@myhost ~]#

Isolating the Potential Problem Space

With basic user error ruled out, I set out to narrow down the scope of the problem. It seems extremely unlikely that pylint or its package are just broken – it’s too commonly used a tool for that. For my own assurance, though, I checked a few other similar environments I had handy and confirmed that:

  • python 3.6.5 and pylint worked on my mac
  • setting up a similar virtualenv on a RHEL 7.5 EC2 instance also worked (“worked” meaning pylint ran as expected)
  • pylint in a python 3.6 virtualenv on my Qubes (linux) desktop also worked

OK, so this is something specific to this particular environment. The OS distro (Amazon Linux 2018.03) was the most obvious difference between the non-working and working environments. But by itself that’s not much of a clue – the OS should have almost nothing to do with the behavior of pip or a module inside a virtualenv.

Since the apparent issue was with the lazy_object_proxy module, I thought I’d try to install just that module and see what happened:

(pylint-test) [root@myhost ~]# pip --no-cache-dir install lazy-object-proxy
Collecting lazy-object-proxy
Downloading https://files.pythonhosted.org/packages/65/1f/2043ec33066e779905ed7e6580384425fdc7dc2ac64d6931060c75b0c5a3/lazy_object_proxy-1.3.1-cp36-cp36m-manylinux1_x86_64.whl (55kB)
100% |████████████████████████████████| 61kB 2.6MB/s
Installing collected packages: lazy-object-proxy
Successfully installed lazy-object-proxy-1.3.1
(pylint-test) [root@myhost ~]# echo $?
0
(pylint-test) [root@myhost ~]# pip list | grep lazy
(pylint-test) [root@myhost ~]#

Huh? It says it installed successfully, but immediately thereafter it isn’t on the list of installed modules? Fishy. Understanding what pip is doing with this module seemed like the appropriate next troubleshooting step.
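Rather than take pip’s word for it, python itself can report whether a module is importable and from where, via `importlib.util.find_spec`. A quick diagnostic sketch (`diagnose` is just a throwaway helper name of mine, not anything from pip):

```python
import importlib.util
import sys

def diagnose(module_name):
    """Report whether a module is importable and, if so, from where."""
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        print("%r is NOT importable; sys.path is:" % module_name)
        for entry in sys.path:
            print("  %s" % entry)
    else:
        print("%r loads from %s" % (module_name, spec.origin))

diagnose("six")                # reported installed: should print an origin
diagnose("lazy_object_proxy")  # on the broken host: dumps sys.path instead
```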

Digging in to pip and lazy_object_proxy

So if pip thinks it’s installing lazy_object_proxy, what’s it doing on the filesystem when it does so? Let’s look:

[root@myhost pylint-test]# find . -name lazy\*
./lib64/python3.6/dist-packages/lazy_object_proxy

Hrm. So it is installing the module. How does this compare to the equivalent virtualenv on the RHEL 7.5 system where pylint works?

(pylint-test) [root@otherhost pylint-test]# find . -name lazy\*
./lib/python3.6/site-packages/lazy_object_proxy

Aha – there’s a difference! On the (working) RHEL-7.5 system, lazy_object_proxy ends up in lib/python3.6/site-packages/, and on the Amazon Linux system with the non-functional pylint it’s in lib64/python3.6/dist-packages/. For reference, this blog post by Lee Mendelowitz does a nice job explaining some fundamentals of package loading, and of site-packages vs. dist-packages in general.

Just because we’ve found a difference doesn’t mean it’s a meaningful difference. In fact, it looks like all of the modules installed on Amazon Linux are in dist-packages rather than site-packages. (I find this surprising, since my understanding is that Amazon Linux is more or less RedHat derived, but that’s a different topic.)

And indeed, packages in dist-packages generally work fine on the Amazon Linux system. For example, the ‘six’ module:

(pylint-test) [root@myhost pylint-test]# pip list | grep six
six 1.11.0
(pylint-test) [root@myhost pylint-test]# python -i
Python 3.6.5 (default, Apr 26 2018, 00:14:31)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib.util
>>> importlib.util.find_spec('six')
ModuleSpec(name='six', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f1dbed43c88>, origin='/root/pylint-test/local/lib/python3.6/dist-packages/six.py')
>>> import six

This makes sense, because pylint-test/local/lib/python3.6/dist-packages is in the pythonpath (pylint-test/local/lib is a symlink to pylint-test/lib/):

>>> import sys
>>> print('\n'.join(sys.path))
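Given that the working and broken systems differ only in which `*-packages` directory a module landed in, a small helper can scan a virtualenv for package directories that aren’t actually on sys.path. This is a sketch of my own, not a pip or virtualenv facility:

```python
import glob
import os
import sys

def orphaned_package_dirs(venv_root):
    """Return *-packages dirs under venv_root that are missing from sys.path."""
    pattern = os.path.join(venv_root, "lib*", "python*", "*-packages")
    on_path = {os.path.realpath(p) for p in sys.path}
    return [d for d in glob.glob(pattern)
            if os.path.realpath(d) not in on_path]

# On the Amazon Linux host this flags lib64/python3.6/dist-packages;
# on the working RHEL host it comes back empty.
print(orphaned_package_dirs(sys.prefix))
```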

Problem Identified; Workaround

The problem with the lazy_object_proxy module is clear at this point: the module is being installed to a directory not in the module search path (sys.path). For some reason, pip installs it to ./lib64/python3.6/dist-packages/lazy_object_proxy, but neither lib64 nor local/lib64 (which symlinks to the former) is in the module path.

This suggests an easy workaround – manually adding the appropriate directory to the pythonpath should make the module loadable:

(pylint-test) [root@ip-172-31-44-121 pylint-test]# python -i
Python 3.6.5 (default, Apr 26 2018, 00:14:31)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import lazy_object_proxy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'lazy_object_proxy'
(pylint-test) [root@ip-172-31-44-121 pylint-test]# PYTHONPATH="./lib64/python3.6/dist-packages/" python -i
Python 3.6.5 (default, Apr 26 2018, 00:14:31)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import lazy_object_proxy
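For code that isn’t launched from a shell where exporting PYTHONPATH is convenient, the same workaround can be applied at runtime with the standard library’s `site.addsitedir`. The hard-coded path below is the one from my broken virtualenv; substitute your own:

```python
import site
import sys

# The directory pip actually installed into, but which is absent from
# sys.path (path from my broken virtualenv - substitute your own).
missing_dir = "/root/pylint-test/lib64/python3.6/dist-packages"

site.addsitedir(missing_dir)  # appends the dir and processes any .pth files in it
assert missing_dir in sys.path
```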

Speculation Re: Root Cause & Real Solution

While manually modifying the environment (PYTHONPATH) might be an effective workaround, it’s not really a solution. As I see it, a key question remains: why is pip installing a package outside of the module search path? Or perhaps pip is doing the right thing, and the question is why isn’t lib64/python3.6/dist-packages in the default module search path?

Is this a pip bug? An error in Amazon Linux’s python packaging? Something subtly wrong in the lazy_object_proxy packaging?

The pip docs don’t seem to address this question, or for that matter explain what determines the path a given module will end up in. They do suggest other possible workarounds (e.g. manually configuring pip install destinations), but those don’t seem more satisfying to me than the PYTHONPATH workaround, and don’t shed light on the remaining mystery.

I have a gut suspicion that the core of this issue is likely related to something specific to Amazon Linux – perhaps the logic in site.py that sets the default module search path. This is very much just a hunch, but the use of dist-packages (rather than site-packages) seems odd. It’s also strange that the out-of-the-box sys.path in Amazon Linux includes /root/pylint-test/local/lib/python3.6/dist-packages but not its lib64 equivalent. I started a thread on the AWS forum to see if anybody there has a relevant insight.
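In the meantime, a quick way to compare systems is to dump what each python’s site module considers the default package directories. One caveat: old-style virtualenvs ship their own site.py, which may not expose `getsitepackages`, hence the guard:

```python
import site
import sys

print("prefix:      %s" % sys.prefix)
print("base prefix: %s" % getattr(sys, "base_prefix", sys.prefix))
if hasattr(site, "getsitepackages"):
    for d in site.getsitepackages():
        print("site dir:    %s" % d)
else:
    print("(this virtualenv's site.py does not expose getsitepackages)")
```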


A datacenter decommissioning last year gave me access to some old Netbotz (now APC) monitoring gear. I jumped at the chance to save it from the garbage, as I remembered it being a pretty slick way to get basic environmental data (temp, door switches, etc.), and its small physical footprint made it an easy way to control a number of networked cameras at once.

My recollection was right, and while the management server appliance proved both annoying to work with and massive overkill for my home projects, I’ve had some luck scraping data out of its web UI.

I wrote a few python classes to make it reasonable to programmatically access the data generated by the cameras and sensors in my backyard chicken coop; you can see the end result at pachube.

The code is available here.  It’s driven by a small db schema that holds basic location info for the monitoring gear and metadata about sensors being watched (e.g. polling intervals and alert thresholds), and produces simple name/value pairs with sensor data satisfying given conditions.  By “conditions” here I mean things like “it’s time to report this sensor value” or “this value changed too much, so I’m reporting on it” — this is by no means meant to replace actual monitoring systems (though it could easily be used to interface with them).
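The decision logic for those conditions boils down to a few lines; here’s a rough sketch of the idea (names and signature are mine for illustration, not taken from the actual code):

```python
import time

def should_report(last_report_ts, last_value, value,
                  poll_interval_s, alert_threshold, now=None):
    """Emit a reading if the polling interval has elapsed, or if the value
    moved by more than the alert threshold since the last report."""
    now = time.time() if now is None else now
    if now - last_report_ts >= poll_interval_s:
        return True
    return abs(value - last_value) > alert_threshold
```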

While I wrote fairly thorough pydoc, the overall documentation is hardly extensive, so I’d be happy to answer questions if getting at netbotz data from python is interesting to anybody else.


Pachube provides an API-exposed store of data from pretty much anything, with a focus on sensor data.  It’s free to use and has what seems to be a pretty complete, reasonably designed, and easy to use API with a nicely flexible authorization model.

I’ve been working on some code to scrape data out of the Netbotz web UI (which does not have an easy to use API) and, as I got to working on the data storage backend for the scraper, remembered Pachube and decided to give it a try rather than reinventing what was effectively the same wheel.

I have generally positive impressions of Pachube after a couple of days messing around with it.  It was pretty easy to get going, and after stumbling into one quickly-fixed bug was up and sending data in from my prototype scraper pretty quickly.

I’m using the following python class to simplify the interaction.  It’s trivial, but I didn’t see anything similar already done, so am throwing it here for future google results:


# This code is released into the public domain (though I'd be interested in seeing
#  improvements or extensions, if you're willing to share back).

import mechanize
import json
import time

class PachubeFeedUpdate:

  _url_base = "http://api.pachube.com/v2/feeds/"
  _feed_id = None
  _version = None
  ## the substance of our update - list of dictionaries with keys 'id' and 'current_value'
  _data = None
  ## the actual object we'll JSONify and send to the API endpoint
  _payload = None
  _opener = None

  def __init__(self, feed_id, apikey):
    self._version = "1.0.0"
    self._feed_id = feed_id
    self._opener = mechanize.build_opener()
    self._opener.addheaders = [('X-PachubeApiKey',apikey)]
    self._data = []
    self._payload = {}

  def addDatapoint(self,dp_id,dp_value):
    self._data.append({'id':dp_id, 'current_value':dp_value})

  def buildUpdate(self):
    self._payload['version'] = self._version
    self._payload['id'] = self._feed_id
    self._payload['datastreams'] = self._data

  def sendUpdate(self):
    url = self._url_base + self._feed_id + "?_method=put"
    try:
      self._opener.open(url, json.dumps(self._payload))
    except mechanize.HTTPError as e:
      print "An HTTP error occurred: %s" % e

Usage is pretty straightforward.  For example, assuming you have defined key (with a valid API key from Pachube) and feed (a few clicks in the browser once you’ve registered) it’s basically like:

pfu = PachubeFeedUpdate(feed,key)
# do some stuff; gather data, repeating as necessary for any number of datastreams
pfu.addDatapoint(stream_id, value)
# finish up and submit the data
pfu.buildUpdate()
pfu.sendUpdate()
The resulting datapoints basically end up looking like they were logged at the time the sendUpdate() call is made.  In my situation, I want to send readings from a couple of dozen sensors each into their own Pachube datastream in one shot, so this works fine.  If, instead, for some reason you need to accumulate updates over time without posting them, you’d need to take a different approach.


I set out this weekend to get an Arduino board to control my Roomba.  (The Roomba has a great – and generally open – interface, and iRobot deserves significant credit for encouraging creative repurposing/extensions of their products.)  I’ve got a few project ideas in mind, but for an initial step just wanted to verify that I could a) send control commands (“move forward”, “turn right”, etc.) from the Arduino, and b) read sensor data (“something is touching my left bumper”, “I’m about to fall down the stairs”).  This post contains my notes, which hopefully will help others doing this sort through some of the issues in a bit less time than I spent.
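For anyone attempting the same, the commands are simple byte sequences. The sketch below builds a DRIVE command in python rather than Arduino C, just to illustrate the framing; the opcode values reflect my reading of iRobot’s published SCI spec, so double-check them against the document before trusting them:

```python
import struct

# Opcode values per my reading of iRobot's SCI spec - verify against the
# document: START wakes the interface, CONTROL enables actuator commands,
# DRIVE moves, SENSORS requests a sensor packet.
START, CONTROL, DRIVE, SENSORS = 128, 130, 137, 142

def drive_command(velocity_mm_s, radius_mm):
    """Build the 5-byte DRIVE command: opcode plus velocity and turn
    radius, each a signed 16-bit big-endian value."""
    return bytes([DRIVE]) + struct.pack(">hh", velocity_mm_s, radius_mm)

# Wake the interface, enable control, then drive: the spec uses 0x8000
# (here -32768 as a signed value) as its "drive straight" sentinel radius.
cmd = bytes([START, CONTROL]) + drive_command(200, -32768)
```

On the Arduino itself the same bytes go out via Serial.write() at the SCI’s default 57600 baud (again, per the spec).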


I’ve been playing with Arduino boards in my limited spare time over the past few months.  It’s a fun way to spend quality hands-on geek time that is clearly distinct (at least to me) from my day job.  Plus, I’m able to start actually instantiating some of the ubiquitous computing / distributed sensor ideas that have been floating around in my head.

I’ve been working on a simple wireless light, temp, and motion sensor.  Light was a trivial CDS photocell connected to the analog port of the arduino.  My first attempt at temp is using the Dallas Semiconductor DS-18B20 digital one-wire sensor, which is pretty slick for $4.25.

There was some good sample code on the main arduino site, but I spent a small bit of time to flesh it out more completely, adding the ability to configure sensor resolution and extracting the temp value from the returned data.  Code is here, if this is interesting or useful to you.


Thanks to Amy and JB‘s motivational dissing of our old (and oft-broken) mythtv installation, I set out last weekend to rebuild the setup, which involves a backend system in the basement and a frontend in the TV room driven by an xbox that can retrieve recorded video from the myth system downstairs.

I had a very easy time getting current myth (0.20) installed on a SuSE 10.2 box with a Hauppauge PVR-350 last weekend, and as I’ve come to expect in 4+ years of myth use, 0.20 is noticeably better than 0.18. Since the xbox scripts that interface with myth are version-specific, I needed to update them too, and this was enough motivation for me to go ahead and get a modern XBMC install.

It’s all working now, and it seems pretty cool.

I’ll spare you the particulars, but there were 2 non-obvious problems I encountered along the way, the corresponding solutions to which I thought I’d leave here where google can find them:

problem #1: database connection errors from the xbmcmythtv script. This was puzzling, as I’d verified that the host-level packet filters on the server running mysqld were allowing traffic, I was seeing successful TCP connections, and I had verified that a different machine on the network could connect to the target db using the username+password I had configured xbmcmythtv to use. Since the xbox has relatively few other diagnostic capabilities, I used Wireshark (formerly Ethereal) to watch more closely, and found a mysql auth error being sent back by the server that read:

Client does not support authentication protocol requested by server; consider upgrading MySQL client

solution #1: use pre-mysql-4.1 style password encoding on the server. With the specific error string (unhelpfully obscured by the xbmc script), google quickly found this note in the MySQL reference manual.
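For the record, the server-side change amounts to a one-line config plus resetting the relevant account’s password so it’s stored in the old format. The `old_passwords` variable and `OLD_PASSWORD()` function are from the MySQL 4.1-era manual; the account name below is illustrative:

```ini
# /etc/my.cnf on the MythTV backend - hand out pre-4.1 password hashes
[mysqld]
old_passwords=1
```

followed by something like `SET PASSWORD FOR 'mythtv'@'%' = OLD_PASSWORD('yourpass');` in the mysql client for whichever account the xbox uses.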

problem #2: script says “caching subtitles” when I try to play a recorded show, then appears to hang for a while before returning to the program listing. This one was quite a head-scratcher for a while, since I wasn’t trying to do anything with subtitles, I couldn’t find any caching options that seemed related, and there was no other indication of something that might be failing (permissions, etc.). What’s more, this problem was coming up after I was successfully getting the list of recorded programs, which meant the xbox was successfully talking to the backend server (mysql, smb, and the myth backend process are all on the same box).

I found some threads on various xbox fora that described very similar problems, but none with solutions that were even potentially relevant.

solution #2: use IPs in the xbmcmythtv config. I had been using the FQDN of my backend box in both the db and general paths config screens of xbmcmythtv. The xbox is configured to use an internal DNS server that is authoritative for the domain in question, and to remove all doubt that DNS was actually working, the DB connection and connection to the mythtv backend worked fine (as evidenced by my ability to retrieve the program listing).

While desperately searching for clues on the “caching subtitles” problem, I found a mention in some random “common problems” document that emphasized that unqualified hostnames (i.e. missing the full domain) in the xmbcmythtv config would not work. I was already using FQDNs, but on a lark I tried replacing the FQDN with the IP of the backend server in the SMB path part of the config, and sure enough, that did the trick.

Apologies for what was certainly a very boring post for, well, anybody who came here except via a search for one of the aforementioned problems. For those of you who did get here looking for answers, I hope this helped.


It’s bike to work week here in Madison, which makes linking to this guy who mounted a camera and a bit o’ electronics to take a photo every 10 seconds as he bikes across NYC apropos:

(You get the point after a few seconds of the video, but it’s a neat project.)


The worm below is participating (unwillingly, perhaps) in the creation of some pretty cool aleatoric music.  This reminds me of the software-based aleatoric project Doug and I did more than a decade ago (entitled, for some reason, "Just be Limp and Let me Abuse You").  I think I actually just encountered a floppy disk containing a backup of the code a couple of weeks ago.  Doug:  do you remember what the input for JBLALMAY was?

[Article. Via STREETtech.]


As part of my aforementioned switch from the Mac to a Windows box as my primary environment, I abandoned my long-time Mac-only RSS aggregator for google reader.  So far I’ve been pretty pleased — it’s easily usable, and keeps up with the excessive quantity of feeds I aggregate, too.  Of course, having yet another basic app be network-based is also a plus.

A friend at work recently pointed me to the shared items feature of google reader, which creates a meta-feed of sorts consisting of the items I flagged as interesting from the various feeds I aggregate.  You can see them here, or grab a feed.  I just started using this, but could see it filling a potentially useful role for very lightweight interest tracking; for example, for things I think are neat but not worth tagging in del.icio.us, much less blogging.


IANAL, but I found the Wisconsin Court of Appeals’ recent refusal to admit evidence of a tape recording of a school bus driver verbally and physically abusing a child on his bus interesting, and a bit puzzling.  The issue the court considered pertains to the recording, of course, not the abuse.  In Wisconsin it’s OK to record conversations to which one is party, but apparently it’s not OK to disclose those recordings, and that was the sticking point for the 2-1 majority.

Policy-wise, we have a long way to go toward a workable future of surveillance and recording technologies.  I’m conflicted on this one — it seems reasonable to be able to record things one is directly perceiving.  At the same time, it seems absurd to suggest that there is some ever-present bubble of confidentiality (i.e. that one would be limited in one’s own use of such recordings).  On the other hand, though, the notion of everyone recording and sharing everything always is troubling.
