Tuesday, April 14, 2015
Wednesday, April 8, 2015
The code I wrote was tied up with the project I was working on then, so I finally made the time to sit down and extract the minimal code to get an LED blinking on a Nucleo development board. Then there was finding a nice, clean template to work with GitHub Pages, and the acquisition of a suitable domain name. Over the course of the series, I'll be adding back in the bits of functionality I figured out originally, beginning with task switching.
It's going to be a work in progress for quite a few months, but feel free to check it out and contact me if you have comments. http://embedded.guide/
Tuesday, July 3, 2012
One of the giveaways at this year's Google I/O conference was the newly announced Nexus Q streaming media player. I was particularly looking forward to tinkering with this device, because it seems like it has a lot of potential to be unlocked (it runs Android ICS, and during the keynote they referred to the USB port as being for "general hackability").
Because I had so much else to carry (my camera equipment, laptop, Nexus 7 and Galaxy Nexus were in my carry-on backpack), I checked my suitcase, which had the Nexus Q and the Chromebox in it, as well as the Sphero I bought for my four-year-old son.
The travel experience was pretty bad. I had arrived in good time for my 3pm flight home via Chicago, but at the gate it was delayed to 4pm, because they didn't have any flight crew available, meaning that I was going to miss my connection. United told me that I'd be stuck in Chicago until the next morning, and reluctantly admitted that they'd have to arrange a hotel room for the night. (But not themselves — they said I'd have to find someone to arrange that when I got there.)
Sitting on the plane, we all watched in bemusement as standby passengers were repeatedly shuffled on and off, and out of the window I saw pets being dropped off (three times, as the mobile conveyor belt had disappeared), and then taken away again on a baggage cart. It all seemed like a bit of a circus. Eventually it was announced that the flight would be further delayed until 5pm, and then as 5pm passed they told us that there were dents in the plane, the depth of which they needed to measure, and that they'd have a maintenance decision by 6pm. I got off the plane to get a drink and stretch my legs, and was back in my seat just in time for them to tell us that they were pulling the plane out of service. On the way out they gave me a slip of paper with a number to call, which I did, and was offered a 10pm flight via Philadelphia on US Airways, getting me home at 9am the next morning. I took it, as it was my best option, but as I was at work the next day I ended up having to be up for 30 hours straight.
When I arrived at my home airport, I picked up my bag from the United flight it had come in on, but when I got it home and unpacked, I discovered with horror that my Nexus Q was missing, having been replaced by a Notice of Inspection from Covenant Aviation Security. CAS only seem to accept complaints by mail, fax or leaving a voicemail, so I'm starting with the people I can talk to on the phone first.
Also missing were: the Sphero charger, which means my son only got about ten minutes of playing with that before I had to tell him we'd need to buy a new charger; the chargers for my shaver and beard trimmer, which can't be replaced on their own, so I'll have to buy whole new ones; my iPad desk stand; and a bottle of heartburn pills. Thankfully, the Sphero itself and the Chromebox were not taken.
I almost managed to put in a claim with United, but at the last moment the agent told me that he couldn't submit it, because they won't take responsibility for my bag since I flew home on US Airways -- despite my bag flying with United! Are US Airways going to tell me they're not responsible because they didn't transport the bag?
This is a sad and frustrating ending to what had been a really great week.
Wednesday, May 4, 2011
But what does it say about a person if they don't? Not necessarily the opposite. In my case, programming is a passion. I've been doing it for fun since I was about five years old. I've been lucky that I could take something I enjoy and turn it into a career, but the day job and the programming I do at home are very different things. My day job is about deadlines, requirements, standardized platforms and change control. They're about the mechanics of delivering products as much as they are about the creativity of writing software. So it's nice to come home and spend some of my increasingly rare free time (I have a wife and a three-year-old) just experimenting and learning.
There's nothing really wrong with that, but there's always room for growth, and I see benefits to myself in 'putting myself out there'. I've recently embarked on a couple of longer-term personal projects. One of them is yielding a Werkzeug-based web app framework as an artifact, and I do intend to release that as open source eventually, even though for the moment it's easier for me to keep it in sync by developing it in the app's private repository.
All of this led me to conceive of the following analogy. Don't think about it too much, though, or it will fall apart.
Some programmers are like rock stars. They create a lot of content that they release with their own name attached to it, and it's a name people in the community know well. Their notability comes with exposure to direct criticism, and popular opinion of them can bias the reception of their work.
Other programmers are more like session musicians. You've probably never heard of them, but they've contributed professionally to many projects. You might even have unknowingly experienced their work as part of a larger product.
Monday, February 7, 2011
The management of this IP address space is delegated across a number of different organisations. At the top level is IANA (the Internet Assigned Numbers Authority), which is part of ICANN (the Internet Corporation for Assigned Names and Numbers). IANA's role is to oversee global IP address allocation, and in the early days of the Internet, IANA would directly provide IP addresses to the organisations that would use them. Between 1993 and 2005, five Regional Internet Registries (RIRs) became responsible for allocations within continental-scale regions. These regions are:
* African Network Information Centre (AfriNIC) for Africa
* American Registry for Internet Numbers (ARIN) for the United States, Canada, and several parts of the Caribbean region
* Asia-Pacific Network Information Centre (APNIC) for Asia, Australia, New Zealand, and neighboring countries
* Latin America and Caribbean Network Information Centre (LACNIC) for Latin America and parts of the Caribbean region
* RIPE NCC for Europe, the Middle East, and Central Asia
In the APNIC and LACNIC regions, allocation is further delegated to National Internet Registries (NIRs), who will further delegate to Local Internet Registries (LIRs), who are ISPs or other large organisations who need control over their own routing. In the other regions, the RIRs directly delegate to LIRs. (My experience was with operating a LIR in the RIPE region.) At each level, the allocations are smaller. IANA allocates /8 blocks to RIRs (about 16 million addresses), whereas LIRs receive a default initial /19 allocation (8192 addresses).
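The arithmetic behind those block sizes falls out of the prefix length: an IPv4 /n prefix leaves 32 - n host bits, so the block contains 2^(32-n) addresses. A quick sketch using Python 3's ipaddress module (the 10.0.0.0 networks below are arbitrary illustration values, not real allocations):

```python
import ipaddress

# An IPv4 /n prefix leaves 32 - n host bits, so a block
# holds 2**(32 - n) addresses.
rir_block = ipaddress.ip_network('10.0.0.0/8')   # the size IANA allocates to an RIR
lir_block = ipaddress.ip_network('10.0.0.0/19')  # a default initial LIR allocation

print(rir_block.num_addresses)  # 16777216, i.e. about 16 million
print(lir_block.num_addresses)  # 8192
```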
Even after all these levels of allocation, the IP addresses are still not considered to be in use. Although an LIR can announce all of its *allocated* ranges, it is expected to formally *assign* parts of those ranges to its customers. The upshot of this is that despite Thursday's announcement, end-users will still be receiving assignments... but only for a few more months.
What happens then? Ultimately, we're going to have to move to IPv6, which, as well as having a 128-bit address space, provides a number of other benefits, such as auto-configuration and improved address mobility. Although IPv6 seems new, the first deployments were in 1999, and it has had extensive testing. For a typical end-user, the transition shouldn't be difficult, as all the major operating systems have good IPv6 support. More advanced users can start using IPv6 now, if they want. Even if your provider doesn't support it, you can use a free tunnel provider such as Hurricane Electric or SixXS. There are also transition mechanisms such as Teredo (which tunnels IPv6 in UDP in IPv4, so can be used through NAT gateways) and 6to4 (which tunnels IPv6 in IPv4, so requires a public IPv4 address). The problem lies with any ISPs who don't have a clear IPv6 deployment strategy. While your desktop computer got its IPv6 support through an OS upgrade, the high-speed routers that ISP networks run on need hardware support. Some ISPs are already offering IPv6 to their customers, and others are beginning trials, but some haven't announced any timeline. The one thing that is clear is that the coming months will see a mix of organisations that sail through the transition and those that find themselves in an 11th-hour panic.
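To illustrate why 6to4 needs a public IPv4 address: a site's IPv6 prefix is derived mechanically from that address, by appending its 32 bits to the well-known 2002::/16 prefix to form a /48. A rough sketch in Python (sixto4_prefix is my own helper name, and 192.0.2.1 is just a documentation address):

```python
import ipaddress

def sixto4_prefix(ipv4):
    """Derive the 6to4 /48 prefix for a public IPv4 address:
    the 2002::/16 prefix followed by the 32-bit IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # Place 0x2002 in the top 16 bits and the IPv4 address in bits 16..47
    v6 = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((v6, 48))

print(sixto4_prefix('192.0.2.1'))  # 2002:c000:201::/48
```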
Friday, November 27, 2009
The __import__ function doesn't allow you to do this, because the locals argument is ignored, so I looked for another way. Before I describe the method I came up with, here's how you might use it:
import elixir
from inject import module_inject

module_inject('myapp.models', elixir)
import myapp.models

Easy!
PEP 302 describes the import hooks that have been available since Python 2.3, and defines an import protocol. By adding an object to sys.meta_path, you can hook into the import process. find_module is called with the module name to see if an object knows how to load it; load_module is then called to do the actual loading. The class below implements both of those methods.
import imp
import sys

class InjectionLoader(object):
    def __init__(self, name, dicts):
        self.name = name
        self.dicts = dicts

    def find_module(self, fullname, path=None):
        if fullname == self.name:
            return self

    def load_module(self, fullname):
        # Get the leaf module name and the directory it should be found in
        if '.' in fullname:
            package, leaf = fullname.rsplit('.', 1)
            path = sys.modules[package].__path__
        else:
            leaf = fullname
            path = None
        # Open the module file
        file, filename, description = imp.find_module(leaf, path)
        # Get the existing module or create a new one (for reload to work)
        module = sys.modules.setdefault(fullname, imp.new_module(fullname))
        module.__file__ = filename
        module.__loader__ = self
        code = compile(file.read(), filename, 'exec')
        # Populate the module namespace with the injected attributes
        for d in self.dicts:
            module.__dict__.update(d)
        # Finally execute the module with its injected attributes
        eval(code, module.__dict__)
        return module

It's instantiated with the module name it's injecting into, and the dicts it is injecting. To make it easier to use, I wrote a helper function,
module_inject. It takes a module name, and one or more dicts or modules. Dicts are injected as-is. Modules have their __dict__s injected, but only the attributes listed in the module's __all__ attribute (or, if that isn't present, only those that don't begin with a double underscore) are used. This is like doing a from module import * at the beginning of the imported module. Here is its implementation:
import sys
import types

def module_inject(name, *args):
    """Set a hook so that when module 'name' is imported, it is executed
    with the attributes in 'args' already in module scope. The arguments
    can be dictionaries or modules (see 'normalize_dict')."""
    args = map(normalize_dict, args)
    sys.meta_path.append(InjectionLoader(name, args))

def normalize_dict(d):
    """If the argument is a module, return the module's dictionary
    filtered by the module's __all__ attribute, otherwise return the
    argument as-is. If the module doesn't have an __all__ attribute, use
    all the attributes that don't begin with a double underscore."""
    if isinstance(d, types.ModuleType):
        keys = getattr(
            d, '__all__',
            filter(lambda k: not k.startswith('__'), d.__dict__.keys())
        )
        d = dict([(key, d.__dict__[key]) for key in keys])
    return d

It's something to be used with caution, though. In general, the Python mantra of *explicit is better than implicit* is a good guideline to follow.
Update: somebody asked me about the use of file as a local variable. I'm actually torn on the issue. Yes, it does shadow the built-in file type, but on the other hand it's concise, and it's the same name used in the Python documentation.
Saturday, October 3, 2009
I often write lines like self.foo = foo in Python __init__ methods, and wonder if it couldn't be done automatically. I came up with the following function to do just that, but I doubt I'll ever use it myself, because it goes against the *explicit is better than implicit* philosophy of Python.
#!/usr/bin/env python
import inspect

def init_from_args():
    frame = inspect.stack()[1][0]         # The calling __init__'s frame
    code = frame.f_code
    var_names = code.co_varnames          # __init__'s parameters and locals
    init_locals = frame.f_locals          # __init__'s dict of locals
    num_args = code.co_argcount           # Number of arguments
    arg_names = var_names[1:num_args]     # Positional argument names
    self_obj = init_locals[var_names[0]]  # The 'self' argument

    # If there's a **kwargs parameter, get the name of it
    kw_name = None
    if code.co_flags & 12 == 12:          # Both *args and **kwargs
        kw_name = var_names[num_args + 1]
    elif code.co_flags & 8:               # Only **kwargs
        kw_name = var_names[num_args]

    # Copy the positional arguments
    for name in arg_names:
        setattr(self_obj, name, init_locals[name])

    # If there was a **kwargs parameter, copy the keyword arguments
    if kw_name:
        for name, value in init_locals[kw_name].items():
            setattr(self_obj, name, value)

class Foo:
    def __init__(self, a, b, *args, **kwargs):
        init_from_args()

    bar = 123
    baz = "hello"
    quux = "foo"

if __name__ == "__main__":
    foo = Foo(1, 2, 3, something="something else")
    print foo.__dict__