Dear blog owner and visitors,

This blog had been infected to serve up Gootloader malware to Google search victims, via a common tactic known as SEO (Search Engine Optimization) poisoning. Your blog was serving up 390 malicious pages. Your blog served up malware to 0 visitors.

I tried my best to clean up the infection, but I would also recommend doing the following:

  • Upgrade WordPress to the latest version (one way the attackers might have gained access to your server)
  • Upgrade all WordPress themes to the latest versions (another way the attackers might have gained access to your server)
  • Upgrade all WordPress plugins (another way the attackers might have gained access to your server), and remove any unnecessary plugins.
  • Verify all users are valid (in case the attackers left a backup account, to get back in)
  • Change all passwords (for WordPress accounts, FTP, SSH, database, etc.) and keys. This is probably how the attackers got in, as they are known to brute force weak passwords
  • Run antivirus scans on your server
  • Block these IPs (5.8.18.7 and 89.238.176.151), either in your firewall, your .htaccess file, or your /etc/hosts file, as these are the attackers' command-and-control servers, which send malicious commands for your blog to execute
  • Check cronjobs (both server and WordPress), aka scheduled tasks. This is a common method an attacker will use to get back in. If you are not sure what this is, Google it
  • Consider wiping the server completely, as you do not know how deep the infection is. If you decide not to, I recommend installing some security plugins for WordPress to try to scan for any remaining malicious files. Integrity Checker, WordPress Core Integrity Checker, Sucuri Security,
    and Wordfence Security all do some level of detection, but none are 100% guaranteed
  • Go through the process for Google to recrawl your site, to remove the malicious links (to see what malicious pages there were, go to Google and search site:your_site.com agreement)
  • Check subdomains, to see if they were infected as well
  • Check file permissions

Gootloader (previously Gootkit) malware has been around since 2014, and is used to initially infect a system and then sell that access off to other attackers, who usually deploy additional malware, including ransomware and banking trojans. Cleaning up your blog will make a dent in how they infect victims. PLEASE try to keep it up-to-date and secure, so this does not happen again.

Sincerely,

The Internet Janitor

Below are some links to research/further explanation on Gootloader:

https://news.sophos.com/en-us/2021/03/01/gootloader-expands-its-payload-delivery-options/

https://news.sophos.com/en-us/2021/08/12/gootloaders-mothership-controls-malicious-content/

https://www.richinfante.com/2020/04/12/reverse-engineering-dolly-wordpress-malware

https://blog.sucuri.net/2018/12/clever-seo-spam-injection.html

This message

 

I recently encountered a surprising problem while setting up an AWS environment: pylint fails to run in a python 3.6 virtualenv on Amazon Linux 2018.3!

(pylint-test) [root@myhost ~]# pylint
Traceback (most recent call last):
  File "/root/pylint-test/bin/pylint", line 11, in <module>
    sys.exit(run_pylint())
  File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/__init__.py", line 15, in run_pylint
    from pylint.lint import Run
  File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/lint.py", line 64, in <module>
    import astroid
  File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/__init__.py", line 54, in <module>
    from astroid.exceptions import *
  File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/exceptions.py", line 11, in <module>
    from astroid import util
  File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/util.py", line 11, in <module>
    import lazy_object_proxy
ModuleNotFoundError: No module named 'lazy_object_proxy'
(pylint-test) [root@myhost ~]#

This post is about hunting down the problem, finding a workaround, and identifying next steps toward a root cause fix.

Initial Troubleshooting

A glance at the stack trace suggests this is simply a matter of a missing module – that the pylint dependency chain wasn’t correctly installed.

This seemed unlikely, though – I’d simply used pip to install pylint in an uncomplicated virtualenv, and pip had successfully identified and installed the needed dependencies. If this were the explanation, something more fundamental was wrong with either pip or one of the modules it was installing.

I went through my install/setup steps again, just to make sure this wasn’t the result of simple user error.

  • Set up a fresh virtualenv with the desired python version:
    [root@myhost ~]# rpm -qf `which python3`
    python36-3.6.5-1.9.amzn1.x86_64
    [root@myhost ~]# virtualenv-3.6 -p /usr/bin/python3 pylint-test
    Running virtualenv with interpreter /usr/bin/python3
    Using base prefix '/usr'
    New python executable in /root/pylint-test/bin/python3
    Also creating executable in /root/pylint-test/bin/python
    Installing setuptools, pip, wheel...done.
  • Activate the virtualenv:
    [root@myhost ~]# source pylint-test/bin/activate
  • Check that the expected pip is being used in the virtualenv:
    (pylint-test) [root@myhost ~]# pip --version
    pip 10.0.1 from /root/pylint-test/local/lib/python3.6/dist-packages/pip (python 3.6)
  • Install pylint via pip (ignoring anything pip has cached, just in case):
    (pylint-test) [root@myhost ~]# pip --no-cache-dir install pylint
    Collecting pylint
    Downloading https://files.pythonhosted.org/packages/f2/95/0ca03c818ba3cd14f2dd4e95df5b7fa232424b7fc6ea1748d27f293bc007/pylint-1.9.2-py2.py3-none-any.whl (690kB)
    100% || 696kB 7.7MB/s
    Collecting isort>=4.2.5 (from pylint)
    Downloading https://files.pythonhosted.org/packages/1f/2c/22eee714d7199ae0464beda6ad5fedec8fee6a2f7ffd1e8f1840928fe318/isort-4.3.4-py3-none-any.whl (45kB)
    100% || 51kB 9.7MB/s
    Collecting astroid<2.0,>=1.6 (from pylint)
    Downloading https://files.pythonhosted.org/packages/0e/9b/18b08991c8c6aaa827faf394f4468b8fee41db1f73aa5157f9f5fb2e69c3/astroid-1.6.5-py2.py3-none-any.whl (293kB)
    100% || 296kB 8.1MB/s
    Collecting mccabe (from pylint)
    Downloading https://files.pythonhosted.org/packages/87/89/479dc97e18549e21354893e4ee4ef36db1d237534982482c3681ee6e7b57/mccabe-0.6.1-py2.py3-none-any.whl
    Collecting six (from pylint)
    Downloading https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
    Collecting lazy-object-proxy (from astroid<2.0,>=1.6->pylint)
    Downloading https://files.pythonhosted.org/packages/65/1f/2043ec33066e779905ed7e6580384425fdc7dc2ac64d6931060c75b0c5a3/lazy_object_proxy-1.3.1-cp36-cp36m-manylinux1_x86_64.whl (55kB)
    100% || 61kB 16.9MB/s
    Collecting wrapt (from astroid<2.0,>=1.6->pylint)
    Downloading https://files.pythonhosted.org/packages/a0/47/66897906448185fcb77fc3c2b1bc20ed0ecca81a0f2f88eda3fc5a34fc3d/wrapt-1.10.11.tar.gz
    Installing collected packages: isort, lazy-object-proxy, wrapt, six, astroid, mccabe, pylint
    Running setup.py install for wrapt ... done
    Successfully installed astroid-1.6.5 isort-4.3.4 lazy-object-proxy-1.3.1 mccabe-0.6.1 pylint-1.9.2 six-1.11.0 wrapt-1.10.11

That all seems as expected. But still, no joy getting pylint to run:

(pylint-test) [root@myhost ~]# pylint
Traceback (most recent call last):
  File "/root/pylint-test/bin/pylint", line 11, in <module>
    sys.exit(run_pylint())
  File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/__init__.py", line 15, in run_pylint
    from pylint.lint import Run
  File "/root/pylint-test/local/lib/python3.6/dist-packages/pylint/lint.py", line 64, in <module>
    import astroid
  File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/__init__.py", line 54, in <module>
    from astroid.exceptions import *
  File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/exceptions.py", line 11, in <module>
    from astroid import util
  File "/root/pylint-test/local/lib/python3.6/dist-packages/astroid/util.py", line 11, in <module>
    import lazy_object_proxy
ModuleNotFoundError: No module named 'lazy_object_proxy'
(pylint-test) [root@myhost ~]#

Isolating the Potential Problem Space

With basic user error ruled out, I set out to narrow down the scope of the problem. It seems extremely unlikely that pylint or its package are just broken – it’s too commonly used a tool for that. For my own assurance, though, I checked a few other similar environments I had handy and confirmed that:

  • python 3.6.5 and pylint worked on my mac
  • setting up a similar virtualenv on a RHEL 7.5 EC2 instance also worked (“worked” meaning pylint ran as expected)
  • pylint in a python 3.6 virtualenv on my Qubes (Linux) desktop also worked

OK, so this is something specific to this particular environment. The OS distro (Amazon Linux 2018.03) was the most obvious difference between the non-working and working environments. But by itself that’s not much of a clue – the OS should have almost nothing to do with the behavior of pip or a module inside a virtualenv.

Since the apparent issue was with the lazy_object_proxy module, I thought I’d try to install just that module and see what happened:

(pylint-test) [root@myhost ~]# pip --no-cache-dir install lazy-object-proxy
Collecting lazy-object-proxy
Downloading https://files.pythonhosted.org/packages/65/1f/2043ec33066e779905ed7e6580384425fdc7dc2ac64d6931060c75b0c5a3/lazy_object_proxy-1.3.1-cp36-cp36m-manylinux1_x86_64.whl (55kB)
100% |████████████████████████████████| 61kB 2.6MB/s
Installing collected packages: lazy-object-proxy
Successfully installed lazy-object-proxy-1.3.1
(pylint-test) [root@myhost ~]# echo $?
0
(pylint-test) [root@myhost ~]# pip list | grep lazy
(pylint-test) [root@myhost ~]#

Huh? It says it installs successfully, but immediately thereafter it isn’t on the list of installed modules? Fishy. Understanding what pip is doing with this module seemed like the appropriate next troubleshooting step.

Digging in to pip and lazy_object_proxy

So if pip thinks it’s installing lazy_object_proxy, what’s it doing on the filesystem when it does so? Let’s look:

[root@myhost pylint-test]# find . -name lazy\*
./lib64/python3.6/dist-packages/lazy_object_proxy
./lib64/python3.6/dist-packages/lazy_object_proxy-1.3.1.dist-info

Hrm. So it is installing the module. How does this compare to the equivalent virtualenv on the RHEL 7.5 system where pylint works?

(pylint-test) [root@otherhost pylint-test]# find . -name lazy\*
./lib/python3.6/site-packages/lazy_object_proxy-1.3.1.dist-info
./lib/python3.6/site-packages/lazy_object_proxy

Aha – there’s a difference! On the (working) RHEL-7.5 system, lazy_object_proxy ends up in lib/python3.6/site-packages/, and on the Amazon Linux system with the non-functional pylint it’s in lib64/python3.6/dist-packages/. For reference, this blog post by Lee Mendelowitz does a nice job explaining some fundamentals of package loading, and of site-packages vs. dist-packages in general.

Just because we’ve found a difference doesn’t mean it’s a meaningful difference. In fact, it looks like all of the modules installed on Amazon Linux are in dist-packages rather than site-packages. (I find this surprising, since my understanding is that Amazon Linux is more or less RedHat derived, but that’s a different topic.)

And indeed, packages in dist-packages generally work fine on the Amazon Linux system. For example, the ‘six’ module:

(pylint-test) [root@myhost pylint-test]# pip list | grep six
six 1.11.0
(pylint-test) [root@myhost pylint-test]# python -i
Python 3.6.5 (default, Apr 26 2018, 00:14:31)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib.util
>>> importlib.util.find_spec('six')
ModuleSpec(name='six', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f1dbed43c88>, origin='/root/pylint-test/local/lib/python3.6/dist-packages/six.py')
>>> import six
>>>

This makes sense, because pylint-test/local/lib/python3.6/dist-packages is in the pythonpath (pylint-test/local/lib is a symlink to pylint-test/lib/):

>>> import sys
>>> print('\n'.join(sys.path))
/root/pylint-test/local/lib64/python3.6/site-packages
/root/pylint-test/local/lib/python3.6/site-packages
/root/pylint-test/lib64/python3.6
/root/pylint-test/lib/python3.6
/root/pylint-test/lib64/python3.6/site-packages
/root/pylint-test/lib/python3.6/site-packages
/root/pylint-test/lib64/python3.6/lib-dynload
/root/pylint-test/local/lib/python3.6/dist-packages
/usr/lib64/python3.6
/usr/lib/python3.6
>>>

Problem Identified; Workaround

The problem with the lazy_object_proxy module is clear at this point: the module is being installed to a directory not in the module search path (sys.path). For some reason, pip installs it to ./lib64/python3.6/dist-packages/lazy_object_proxy, but neither lib64 nor local/lib64 (which symlinks to the former) is in the module path.
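
For anyone who wants to check for that kind of mismatch directly, a quick sketch like the following (not part of my original debugging session) compares the interpreter’s configured install directories against sys.path. A distro-patched pip/distutils won’t necessarily honor exactly what sysconfig reports, but a directory that pip writes to and that never shows up on sys.path is the smoking gun either way:

import sys
import sysconfig

# Where the interpreter is configured to install packages:
# 'purelib' for pure-python packages, 'platlib' for arch-specific ones.
install_dirs = {key: sysconfig.get_paths()[key] for key in ("purelib", "platlib")}

for name, path in install_dirs.items():
  print("%-8s %s  (on sys.path: %s)" % (name, path, path in sys.path))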

This suggests an easy workaround – manually adding the appropriate directory to the pythonpath should make the module loadable:

(pylint-test) [root@ip-172-31-44-121 pylint-test]# python -i
Python 3.6.5 (default, Apr 26 2018, 00:14:31)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import lazy_object_proxy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'lazy_object_proxy'
>>>
(pylint-test) [root@ip-172-31-44-121 pylint-test]# PYTHONPATH="./lib64/python3.6/dist-packages/" python -i
Python 3.6.5 (default, Apr 26 2018, 00:14:31)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import lazy_object_proxy
>>>
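
If you want the workaround to stick without exporting PYTHONPATH every time, one option is a .pth file in a directory that is on sys.path, pointing at the one that isn’t. This is only a sketch – it assumes the virtualenv’s site-packages directory still gets normal .pth processing at startup, which I haven’t verified on Amazon Linux – but it shows the idea:

import os

venv = "/root/pylint-test"
missing_dir = os.path.join(venv, "lib64/python3.6/dist-packages")
site_dir = os.path.join(venv, "lib/python3.6/site-packages")

# At startup, site.py appends each existing directory named in a *.pth file
# (found in a site directory) to sys.path.
with open(os.path.join(site_dir, "lib64-dist-packages.pth"), "w") as pth:
  pth.write(missing_dir + "\n")

Alternatively, adding the PYTHONPATH export to the virtualenv’s activate script accomplishes the same thing with less magic.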

Speculation Re: Root Cause & Real Solution

While manually modifying the environment (PYTHONPATH) might be an effective workaround, it’s not really a solution. As I see it, a key question remains: why is pip installing a package outside of the module search path? Or perhaps pip is doing the right thing, and the question is why isn’t lib64/python3.6/dist-packages in the default module search path?

Is this a pip bug? An error in Amazon Linux’s python packaging? Something subtly wrong in the lazy_object_proxy packaging?

The pip docs don’t seem to address this question, or for that matter explain what determines the path a given module will end up in. They do suggest other possible workarounds (e.g. manually configuring pip install destinations), but those don’t seem any more satisfying to me than the PYTHONPATH workaround, and don’t shed light on the remaining mystery.

I have a gut suspicion that the core of this issue is something specific to Amazon Linux – perhaps the logic in site.py that sets the default module search path. This is very much just a hunch, but the use of dist-packages (rather than site-packages) seems odd. It’s also strange that the out-of-the-box sys.path in Amazon Linux includes /root/pylint-test/local/lib/python3.6/dist-packages but not its lib64 equivalent. I started a thread on the AWS forum to see if anybody there has a relevant insight.
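
In the meantime, a bit of introspection into what site initialization actually produced might help anyone comparing Amazon Linux against a working system. Here’s a rough sketch (getsitepackages() isn’t available in every virtualenv-provided site module, hence the guard):

import site
import sys

print("prefix:      ", sys.prefix)
print("exec_prefix: ", sys.exec_prefix)

# Not every virtualenv-provided site module exposes getsitepackages().
if hasattr(site, "getsitepackages"):
  print("site-packages dirs:", site.getsitepackages())

print("sys.path:")
for entry in sys.path:
  print("  ", entry)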

 

A datacenter decommissioning last year gave me access to some old Netbotz (now APC) monitoring gear. I jumped at the chance to save it from the garbage, as I remembered it being a pretty slick way to get basic environmental data (temp, door switches, etc.) and it was a small physical footprint way to control a number of networked cameras at once.

My recollection was right, and while the management server appliance proved both annoying to work with and massive overkill for my home projects, I’ve had some luck scraping data out of its web UI.

I wrote a few python classes to make it reasonable to programmatically access the data generated by the cameras and sensors in my backyard chicken coop; you can see the end result at Pachube.

The code is available here.  It’s driven by a small db schema that holds basic location info for the monitoring gear and metadata about sensors being watched (e.g. polling intervals and alert thresholds), and produces simple name/value pairs with sensor data satisfying given conditions.  By “conditions” here I mean things like “it’s time to report this sensor value” or “this value changed too much, so I’m reporting on it” — this is by no means meant to replace actual monitoring systems (though it could easily be used to interface with them).
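
To make the “conditions” idea concrete, the decision boils down to something like the sketch below (the names here are hypothetical – the real code hangs this logic off the db-backed sensor metadata):

import time

def should_report(sensor, value, now=None):
  """Decide whether a sensor reading is worth emitting.

  `sensor` is assumed to carry last_report_time, last_value,
  poll_interval (seconds), and change_threshold attributes.
  """
  now = now if now is not None else time.time()
  if now - sensor.last_report_time >= sensor.poll_interval:
    return True   # it's time to report this sensor value
  if abs(value - sensor.last_value) > sensor.change_threshold:
    return True   # the value changed too much to ignore
  return False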

While I wrote fairly thorough pydoc, the overall documentation is hardly extensive, so I’d be happy to answer questions if getting at netbotz data from python is interesting to anybody else.

 

Pachube provides an API-exposed store of data from pretty much anything, with a focus on sensor data.  It’s free to use and has what seems to be a pretty complete, reasonably designed, and easy to use API with a nicely flexible authorization model.

I’ve been working on some code to scrape data out of the Netbotz web UI (which does not have an easy to use API) and, as I got to working on the data storage backend for the scraper, remembered Pachube and decided to give it a try rather than reinventing what was effectively the same wheel.

I have generally positive impressions of Pachube after a couple of days messing around with it.  It was pretty easy to get going, and after stumbling into one quickly-fixed bug was up and sending data in from my prototype scraper pretty quickly.

I’m using the following python class to simplify the interaction.  It’s trivial, but I didn’t see anything similar already done, so am throwing it here for future google results:


#!/usr/bin/python

# This code is released into the public domain (though I'd be interested in seeing
#  improvements or extensions, if you're willing to share back).

import mechanize
import json
import time

class PachubeFeedUpdate:

  _url_base = "http://api.pachube.com/v2/feeds/"
  _feed_id = None
  _version = None
  ## the substance of our update - list of dictionaries with keys 'id' and 'current_value'
  _data = None
  ## the actual object we'll JSONify and send to the API endpoint
  _payload = None
  _opener = None

  def __init__(self, feed_id, apikey):
    self._version = "1.0.0"
    self._feed_id = feed_id
    self._opener = mechanize.build_opener()
    self._opener.addheaders = [('X-PachubeApiKey',apikey)]
    self._data = []
    self._payload = {}

  def addDatapoint(self,dp_id,dp_value):
    self._data.append({'id':dp_id, 'current_value':dp_value})

  def buildUpdate(self):
    self._payload['version'] = self._version
    self._payload['id'] = self._feed_id
    self._payload['datastreams'] = self._data

  def sendUpdate(self):
    url = self._url_base + self._feed_id + "?_method=put"
    try:
      self._opener.open(url,json.dumps(self._payload))
    except mechanize.HTTPError as e:
      print "An HTTP error occurred: %s " % e

Usage is pretty straightforward.  For example, assuming you have defined key (with a valid API key from Pachube) and feed (a few clicks in the browser once you’ve registered) it’s basically like:


pfu = PachubeFeedUpdate(feed,key)
# do some stuff; gather data, repeating as necessary for any number of datastreams
pfu.addDatapoint(<datastream_id>, <data_value>)
# finish up and submit the data
pfu.buildUpdate()
pfu.sendUpdate()

The resulting datapoints basically end up looking like they were logged at the time the sendUpdate() call is made.  In my situation, I want to send readings from a couple of dozen sensors each into their own Pachube datastream in one shot, so this works fine.  If, instead, for some reason you need to accumulate updates over time without posting them, you’d need to take a different approach.
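
If that need ever comes up for me, the likely shape is a small local buffer that timestamps readings as they’re taken and leaves submission for later – roughly like this sketch (the class and method names are made up, and whether/how Pachube accepts historical timestamps is a separate question I haven’t dug into):

import time

class BufferedReadings:
  """Accumulate (timestamp, datastream_id, value) tuples for later submission."""

  def __init__(self):
    self._buffer = []

  def record(self, dp_id, dp_value):
    self._buffer.append((time.time(), dp_id, dp_value))

  def drain(self):
    readings, self._buffer = self._buffer, []
    return readings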

 

I set out this weekend to get an Arduino board to control my Roomba.  (The Roomba has a great – and generally open – interface, and iRobot deserves significant credit for encouraging creative repurposing/extensions of their products.)  I’ve got a few project ideas in mind, but for an initial step I just wanted to verify that the Arduino could a) send control commands (“move forward”, “turn right”, etc.), and b) read sensor data (“something is touching my left bumper”, “I’m about to fall down the stairs”).  This post contains my notes, which hopefully will help others doing this sort of thing work through some of the issues in a bit less time than I spent.

 

I’ve been playing with Arduino boards in my limited spare time over the past few months.  It’s a fun way to spend quality hands-on geek time that is clearly distinct (at least to me) from my day job.  Plus, I’m able to start actually instantiating some of the ubiquitous computing / distributed sensor ideas that have been floating around in my head.

I’ve been working on a simple wireless light, temp, and motion sensor.  Light was a trivial CdS photocell connected to the analog port of the Arduino.  My first attempt at temp is using the Dallas Semiconductor DS-18B20 digital one-wire sensor, which is pretty slick for $4.25.

There was some good sample code on the main Arduino site, but I spent a bit of time fleshing it out more completely, adding the ability to configure sensor resolution and extracting the temp value from the returned data.  Code is here, if this is interesting or useful to you.

 

I was going through an old pile of paper in my office recently and encountered a set of note cards I’d accumulated years ago, back in the very small, scrappy, it-definitely-might-not-make-it startup stage. Most of the cards contained miscellaneous reminders, todos, or ideas I thought worthy of further exploration.

A few of the cards, though, had the record of an idiom mixing game my colleagues and I played back in the day (I’ve had the distinct pleasure of working with strongly multi-disciplinary and linguistically inclined geeks).

At its basic level, the game produced comprehensible phrases that amusingly combined two familiar idioms, such as “there are other fish to skin”, or “there are other cats in the sea”.

These are good for a chuckle, but not fundamentally anything more than language slapstick. Some combined idioms of similar intent in ways that made more vibrant images than did the originals, such as

“that opens up a whole new can of monkeys”

(A “can of worms” is one thing, but monkeys make everything funnier.) Rather than just “getting ducks in a row” or having things “fall in line”, we had

“all the ducks are falling in line”

Others are amusing but confusing, and almost seem as if they mean something, at least until you actually think about them. Example:

“Happier than a clam in pigsh*t”

The pinnacle of our mixed-idiom game, though, was those hard-to-find combinations whose meanings were a novel blend of the original idioms. Most of these tended to mockingly riff on various elements of commonly accepted corporate-speak.

“I’m just putting them on the table as I see them”

for example, combines the casual (if sometimes cowardly) innocence of the defensive verbal communication standby “I’m just calling them as I see them” with the trite business-ese of “putting something on the table” to create an all-new description of an impetuous laziness thrust upon others.

Better still, in my opinion, is the lighthearted cynical foreshadowing of:

“We’ll burn that bridge when we come to it”

But my favorite, by far, is a sadly apt commentary on organizational politics gone awry:

“I dropped the ball in your court”

Have more? Oh yes you do … comment away!

 

Thanks to Amy and JB‘s motivational dissing of our old (and oft-broken) mythtv installation, I set out last weekend to rebuild the setup, which involves a backend system in the basement and a frontend in the TV room driven by an xbox that can retrieve recorded video from the myth system downstairs.

I had a very easy time getting current myth (0.20) installed on a SuSE 10.2 box with a Hauppauge PVR-350 last weekend, and as I’ve come to expect in 4+ years of myth use, 0.20 is noticeably better than 0.18. Since the xbox scripts that interface with myth are version-specific, I needed to update them too, and this was enough motivation for me to go ahead and get a modern XBMC install.

It’s all working now, and it seems pretty cool.

I’ll spare you the particulars, but there were 2 non-obvious problems I encountered along the way, the corresponding solutions to which I thought I’d leave here where google can find them:

problem #1: database connection errors from the xbmcmythtv script. This was puzzling, as I’d verified that the host-level packet filters on the server running mysqld were allowing traffic, I was seeing successful TCP connections, and I had confirmed that a different machine on the network could connect to the target db using the username+password I had configured xbmcmythtv to use. Since the xbox has relatively few other diagnostic capabilities, I used Wireshark (formerly Ethereal) to watch more closely, and found a mysql auth error being sent back by the server that read:

Client does not support authentication protocol requested by server; consider upgrading MySQL client

solution #1: use pre-mysql-4.1 style password encoding on the server. With the specific error string (unhelpfully obscured by the xbmc script), google quickly found this note in the MySQL reference manual.

problem #2: script says “caching subtitles” when I try to play a recorded show, then appears to hang for a while before returning to the program listing. This one was quite a head-scratcher for a while, since I wasn’t trying to do anything with subtitles, I couldn’t find any caching options that seemed related, and there was no other indication of something that might be failing (permissions, etc.). What’s more, this problem was coming up after I was successfully getting the list of recorded programs, which meant the xbox was successfully talking to the backend server (mysql, smb, and the myth backend process are all on the same box).

I found some threads on various xbox fora that described very similar problems, but none with solutions that were even potentially relevant.

solution #2: use IPs in the xbmcmythtv config. I had been using the FQDN of my backend box in both the db and general paths config screens of xbmcmythtv. The xbox is configured to use an internal DNS server that is authoritative for the domain in question, and, removing all doubt that DNS was actually working, both the DB connection and the connection to the mythtv backend worked fine (as evidenced by my ability to retrieve the program listing).

While desperately searching for clues on the “caching subtitles” problem, I found a mention in some random “common problems” document emphasizing that unqualified hostnames (i.e. missing the full domain) in the xbmcmythtv config would not work. I was already using FQDNs, but on a lark I tried replacing the FQDN with the IP of the backend server in the SMB path part of the config, and sure enough, that did the trick.

Apologies for what was certainly a very boring post for, well, anybody who came here except via a search for one of the aforementioned problems. For those of you who did get here looking for answers, I hope this helped.

 

I had the following dialogue with my 3-year-old son after I got home from work last night:

Ben: “What does contrary mean?”
Me: “It means inclined to disagree.”
Ben: “No. No it doesn’t. Don’t say that.”

 

Bonsai
For those of you who are in the Madison area, the Badger Bonsai Society (of which I’ve been a member since I started working with bonsai in 2004) has its annual show this weekend.  The show is Saturday and Sunday (May 19th & 20th) from 10am – 4pm at Olbrich Botanical Gardens on Atwood.  Admission is free, though donations are accepted.  Stop by and check out some cool trees in pots for a few minutes during your weekend.  I’ll also have four trees in the show this year, for the first time.

For those interested in delving a bit deeper, there will be demonstrations at 11am and 2pm on both days.

(photo courtesy of Grufnik via the Creative Commons license)
