If it won't be simple, it simply won't be. By Miki Tebeka, CEO, 353Solutions
Saturday, March 26, 2011
A Whirlwind of Python
I gave a fast-paced talk about Python at work; you can download it here (unzip and open index.html).
Inside the zip file are also the files used to create the talk. I used s5 as the framework and generated the slides from Python source files using Pygments.
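The actual generator is inside the zip file; as a minimal sketch of the idea (not the original script), turning one Python source file into highlighted HTML with Pygments looks roughly like this:
#!/usr/bin/env python
# Sketch only: render a Python file as highlighted HTML with Pygments
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

def render(path):
    code = open(path).read()
    # full=True produces a standalone page with the CSS embedded
    return highlight(code, PythonLexer(), HtmlFormatter(full=True))

if __name__ == "__main__":
    import sys
    print render(sys.argv[1])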
Thursday, March 03, 2011
Convert oggs to mp3 - the fast way
I've created a quick script to convert all .ogg files in a given directory to .mp3 (using oggdec and lame). However, it was running too slow for my taste, which was a good excuse to play with multiprocessing.Pool. On a short list of 4 .ogg files the processing time went from 32 seconds to 12 seconds.
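A minimal sketch of the approach (not the original script; it assumes oggdec and lame are on the PATH):
#!/usr/bin/env python
# Sketch only: convert every .ogg in a directory to .mp3, one worker per CPU
from glob import glob
from multiprocessing import Pool
from os.path import splitext
from subprocess import call
import sys

def convert(ogg):
    base = splitext(ogg)[0]
    call(["oggdec", ogg])                        # writes base.wav next to the .ogg
    call(["lame", base + ".wav", base + ".mp3"]) # encode the .wav to .mp3

if __name__ == "__main__":
    dirname = sys.argv[1] if len(sys.argv) > 1 else "."
    pool = Pool()  # defaults to one process per CPU
    pool.map(convert, glob("%s/*.ogg" % dirname))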
Wednesday, March 02, 2011
Adding XML support to ctags
I use ctags a lot with Vim. It lets me jump to definitions quickly.
At work we have a lot of Spring XML configuration files, and it was pretty easy to add support for XML to ctags. Just add the following to ~/.ctags:
--langdef=XML --langmap=XML:.xml --regex-XML=/id="([a-zA-Z0-9_]+)"/\1/d,definition/
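With that in place, the usual ctags workflow applies (a generic illustration, not from the original post):
ctags -R .   # regenerate the tags file; XML "id" attributes are now indexed
Then, in Vim, pressing Ctrl-] on an id jumps to its definition in the XML file.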
Friday, February 18, 2011
AppEngine Work Environment
I'm doing a little AppEngine project (boy, my wife is one tough customer :). Along the way I've developed some scripts to help me with my workflow (which is mostly coding with Vim and trying stuff out in the REPL).
I have python2.5 (which is the version AppEngine uses) at /opt/python2.5 and I've "pip installed" pyflakes and ipython. The AppEngine SDK is at /opt/google_appengine.
gae.py
repl.sh
run-local.sh
check.sh
push.sh
pypath.sh
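The scripts above are listed by name only; as a rough sketch (an assumption about their shape, not the originals), pypath.sh and check.sh might look something like this:
#!/bin/bash
# pypath.sh (sketch, not the original): put the AppEngine SDK on PYTHONPATH
# so the REPL and pyflakes can import the google.appengine modules
export PYTHONPATH=/opt/google_appengine:$PYTHONPATH

#!/bin/bash
# check.sh (sketch, not the original): run the pyflakes installed under
# /opt/python2.5 over the project sources
. $(dirname $0)/pypath.sh
/opt/python2.5/bin/pyflakes *.py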
Monday, January 31, 2011
JSON decorator for CherryPy
#!/usr/bin/env python
from functools import wraps
import json

from cherrypy import response, expose

def jsonify(func):
    '''JSON decorator for CherryPy'''
    @wraps(func)
    def wrapper(*args, **kw):
        value = func(*args, **kw)
        response.headers["Content-Type"] = "application/json"
        return json.dumps(value)
    return wrapper

def example():
    from cherrypy import quickstart
    from datetime import datetime

    class Time:
        @expose
        @jsonify
        def index(self):
            now = datetime.now()
            return {
                "date" : now.strftime("%Y-%m-%d"),
                "time" : now.strftime("%H:%M"),
                "day" : now.strftime("%A"),
            }

    quickstart(Time())

if __name__ == "__main__":
    example()
$ curl -i localhost:8080
HTTP/1.1 200 OK
Date: Mon, 31 Jan 2011 03:18:34 GMT
Content-Length: 56
Content-Type: application/json
Server: CherryPy/3.1.2

{"date": "2011-01-30", "day": "Sunday", "time": "19:18"}
Thursday, January 20, 2011
Having hg output svn diffs
In my current workplace we use Subversion as the main VCS. I use Mercurial on top of it to get easy feature branches. My problem was that we use svn patches in our review process, and Mercurial (hg diff -r default) gave me patches in a different format. The solution (as suggested by durin42 on #mercurial) was to use hgsubversion.
After installing hgsubversion using pip (or easy_install), you need to add it to your ~/.hgrc:
[extensions]
hgsubversion =
Then you can create patches using "hg diff --svn -r default". I use the following genpatch script for that:
#!/bin/bash
# Generate svn style diff for current hg feature branch
branch=$(hg branch)
if [ -z "$branch" ]; then
    echo "error: not a mercurial repo" 1>&2
    exit 1
fi

if [ "$branch" == "default" ]; then
    echo "error: in default branch" 1>&2
    exit 1
fi
hg diff --svn -r default > ${branch}.patch
Sunday, January 16, 2011
gcalc - Command line interface to Google calculator
#!/usr/bin/env python
'''Command line interface to Google calculator

gcalc 100f c -> 37.7777778 degrees Celsius
'''
# Idea taken from http://bit.ly/dVL4H3
import json
from urllib import urlopen
import re

def main(argv=None):
    import sys
    from optparse import OptionParser

    argv = argv or sys.argv
    parser = OptionParser("%prog FROM TO\n\t%prog 100f c")
    opts, args = parser.parse_args(argv[1:])
    if len(args) != 2:
        parser.error("wrong number of arguments") # Will exit

    url = "http://www.google.com/ig/calculator?q=%s=?%s" % tuple(args)
    try:
        # We decode to UTF-8 since Google sometimes return funny stuff
        result = urlopen(url).read().decode("utf-8", "ignore")
        # Convert to valid JSON: {foo: "1"} -> {"foo" : "1"}
        result = re.sub("([a-z]+):", '"\\1" :', result)
        result = json.loads(result)
    except (ValueError, IOError), e:
        raise SystemExit("error: %s" % e)

    if result["error"]:
        raise SystemExit("error: %s" % result["error"])

    print result["rhs"]

if __name__ == "__main__":
    main()
Friday, December 31, 2010
svncommit
#!/bin/bash
# Template for "svn commit"
# Add "export SVN_EDITOR=/path/to/this/file" to your .bashrc
# Fail on first error
set -e
filename=$1
editor=${EDITOR-vim}
mv $filename /tmp
# The template
cat << EOF > $filename
Bug #
Reviewed-by:
EOF
# Add file list to template
cat /tmp/$filename >> $filename
mtime=$(stat -c %y $filename)
$editor $filename
# Restore old file so svn will see it didn't change
if [ "$(stat -c %y $filename)" == "$mtime" ]; then
    mv -f /tmp/$filename $filename
fi
Wednesday, November 03, 2010
Delays
Sorry for the long pause.
I just started a new job which is not in Python. My interest has also shifted to Clojure.
I'll try to post here from time to time, but the frequency will be lower for the foreseeable future.
In the meanwhile, keep on hacking!
Tuesday, September 07, 2010
Getting last N items of iterator
islice allows you to get the first N items of an iterator, but since it doesn't accept negative indexes, you can't use it to get the last N. The solution is very simple: use deque.
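A minimal sketch of the idea (the helper name last is just for illustration):
from collections import deque

def last(iterable, n):
    # A deque with maxlen=n keeps only the last n items pushed into it
    return list(deque(iterable, maxlen=n))

print last(xrange(10), 3)  # [7, 8, 9]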
Sunday, August 29, 2010
Configuring Ubuntu For Asus Eee
Lately I lost my main laptop (it belonged to the workplace), and my older one allows me to use her Eee for now. The Eee came with Windows 7 Starter edition, which I quickly grew tired of. I decided to try Ubuntu on it. Trying not to cause too much damage, I used Wubi.
Installing the main distro went without a problem, and after a reboot I had a default Gnome desktop up. What I'm describing next is my attempt to get as much screen real estate as possible. Note that I spend most of my time in Firefox and in the shell.
General Setup
- Delete the bottom panel (right click and "Delete This Panel")
- Auto hide the top panel (right click, "Properties" and "Autohide")
- Install "Gnome Do"
Firefox Setup
- Right click on the toolbar, "Customize..." and check "Use small icons"
- Click on the "View" menu and leave only the navigation toolbar
- Uncheck the "Status bar" in the "View" menu
- Install the "Hide Menubar" Firefox addon (clicking on "ALT" will show the menu)
- I use GMail and Google Reader, so "Better GMail" and "Better GReader" addons helped
- Install "Adblock Plus" to get more content inside web pages
- This is just a personal preference, but I think installing "Chromifox Extreme" theme freed up some space as well
Terminal Setup
In the "Edit" menu click on "Profile preferences". And on the "General" tab, uncheck "Show menubar by default on new terminals" (use SHIFT-F10 to get a context menu to enable it)Two More Things
If your kids play webkinz, install Google Chrome. It works much better there (there's always a scroll bar they can use).As you probably figured out, I am looking for a job right now. If you know someone who is hiring, please point them to my resume.
Wednesday, July 07, 2010
curl and couchdb - a love story
The CouchDB API is JSON over HTTP, which makes curl my default tool for playing with the database.
Here's a small demo:
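A sketch of a typical session against a local CouchDB (the database and document names are illustrative, not from the original demo):
# Create a database
curl -X PUT http://localhost:5984/demo
# Add a document
curl -X PUT http://localhost:5984/demo/doc1 -d '{"name": "miki", "lang": "python"}'
# Read it back
curl http://localhost:5984/demo/doc1
# List all databases
curl http://localhost:5984/_all_dbs
# Clean up
curl -X DELETE http://localhost:5984/demo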
Friday, June 11, 2010
import_any
Import any file as a Python module. This is a nice big security risk, so beware ...
from types import ModuleType

def import_any(path, name=None):
    module = ModuleType(name or "dummy")
    execfile(path, globals(), module.__dict__)
    return module
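Example usage (the file path is made up):
settings = import_any("/tmp/settings.py")  # hypothetical path, not on sys.path
print settings.__dict__.keys()             # names defined in that file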
Thursday, June 03, 2010
Tagging last good build in git
git (and Mercurial) has a nice feature: you can "move" tags (using -f). At Sauce Labs we use that to keep track of our last good build and the last deployed revision.
In our continuous integration system, we mark the last good build in a final step that is executed only if the test suite passes.
tag-ok-build.sh
#!/bin/bash
# Exit on first error
set -e

tag=last-green-build
revision=$(git rev-parse HEAD)
echo "Tagging $revision as $tag"
git tag -f $tag $revision
git pull origin master
git push --tags origin master
To deploy, we use the following script, which also tags the last deployed version.
deploy.sh
#!/bin/bash
# Deploy, meaning sync to the last successful build
tag=last-green-build
# Fetch gets the latest changes from the remote, but does not touch the
# working directory
git fetch --tags
# Merge in the last successful build
git merge ${tag}
# Tag the currently deployed revision
git tag -f deployed ${tag}
git push --tags
Saturday, May 15, 2010
couchnuke
A simple script to "nuke" a CouchDB server. Use with care!
#!/bin/bash
# Delete all databases on a couchdb server
# Exit on first error
set -e

if [ $# -gt 1 ]; then
    echo "usage: $(basename $0) [DB_URL]"
    exit 1
fi

URL=${1-http://localhost:5984}

echo -n "Delete *ALL* databases on $URL? [y/n] "
read ans
if [ "$ans" != "y" ]; then
    exit
fi

for db in $(curl -s $URL/_all_dbs | tr '",[]' ' \n ');
do
    echo "Deleting $db"
    curl -X DELETE $URL/$db
done
Monday, April 12, 2010
Run a temporary CouchDB
One nice way to isolate tests is to point the code to a temporary database. Below is a script that launches a new instance of CouchDB on a given port.
#!/bin/bash
# A script to run a temporary couchdb instance
if [ $# -ne 1 ]; then
    echo "usage: $(basename $0) PORT"
    exit 1
fi
port=$1
base=$(mktemp -d)
dbdir=$base/data
config=$base/config.ini
mkdir -p $dbdir
cat <<EOF > $config
[httpd]
port = $port
[couchdb]
database_dir = $dbdir
view_index_dir = $dbdir
[log]
file = ${base}/couch.log
EOF
trap "rm -fr $base" SIGINT SIGTERM
echo "couchdb directory is $base"
couchdb -a $config
Friday, April 09, 2010
Sourcing a shell script
Sometimes you want to emulate the action of bash's "source" from Python, setting some environment variables.
Here's one way to do it:
from subprocess import Popen, PIPE
from os import environ

def source(script, update=1):
    pipe = Popen(". %s; env" % script, stdout=PIPE, shell=True)
    data = pipe.communicate()[0]
    env = dict((line.split("=", 1) for line in data.splitlines()))
    if update:
        environ.update(env)
    return env
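For example (the script name is hypothetical):
env = source("./settings.sh")  # hypothetical shell script that exports variables
print env.get("PATH")          # the environment as the script left it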
Thursday, March 25, 2010
Showing Solr Search Results Using jQuery
Solr is a wonderful product. However, it does not have an integrated search page (there's one in the admin section, but it's nothing you want to expose).
Luckily, Solr supports JSON results and even JSONP, which means we can do all the search code in our web page.
Note that I'm using the Solr demo that comes with the distribution; your search results might have different attributes depending on the schema.
<html>
  <head>
    <title>Solr Search</title>
  </head>
  <body>
    <h3>Solr Search</h3>
    Query: <input id="query" />
    <button id="search">Search</button>
    <hr />
    <div id="results">
    </div>
  </body>
  <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
  <script>
    function on_data(data) {
        $('#results').empty();
        var docs = data.response.docs;
        $.each(docs, function(i, item) {
            $('#results').prepend($('<div>' + item.name + '</div>'));
        });
        var total = 'Found ' + docs.length + ' results';
        $('#results').prepend('<div>' + total + '</div>');
    }

    function on_search() {
        var query = $('#query').val();
        if (query.length == 0) {
            return;
        }
        var url = 'http://localhost:8983/solr/select/?wt=json&json.wrf=?&' +
                  'q=' + query;
        $.getJSON(url, on_data);
    }

    function on_ready() {
        $('#search').click(on_search);
        /* Hook enter to search */
        $('body').keypress(function(e) {
            if (e.keyCode == '13') {
                on_search();
            }
        });
    }

    $(document).ready(on_ready);
  </script>
</html>
To run the Solr demo, extract the distribution, then:
cd example
java -jar start.jar > /dev/null &
cd exampledocs
./post.sh *.xml