Some software I've found useful in my research, occasionally with personal forks, but no major time investments. When I've put in some more serious work, the appropriate tag is code.

Node

Node is a server-side JavaScript engine (i.e. it executes JavaScript without using a browser). This means that JavaScript developers can now develop tools in their native language, so it's not a surprise that the Bootstrap folks use Grunt for their build system. I'm new to the whole Node ecosystem, so here are my notes on how it works.

Start off by installing npm, the Node package manager. On Gentoo, that's:

# USE=npm emerge -av net-libs/nodejs


[Configure npm][npm-config] to make "global" installs in your personal space:

# npm config set prefix ~/.local/


Install the Grunt command line interface for building Bootstrap:

$ npm install -g grunt-cli


That installs the libraries under ~/.local/lib/node_modules and drops symlinks to binaries in ~/.local/bin (which is already in my PATH thanks to my dotfiles). Clone Bootstrap and install its dependencies:

$ git clone git://github.com/twbs/bootstrap.git
$ cd bootstrap
$ npm install


This looks in the local [package.json][] to extract a list of dependencies, and installs each of them under node_modules. Node likes to isolate its packages, so every dependency for a given package is installed underneath that package. This leads to some crazy nesting:

$ find node_modules/ -name graceful-fs
node_modules/grunt/node_modules/glob/node_modules/graceful-fs
node_modules/grunt/node_modules/rimraf/node_modules/graceful-fs
node_modules/grunt-contrib-clean/node_modules/rimraf/node_modules/graceful-fs
node_modules/grunt-contrib-qunit/node_modules/grunt-lib-phantomjs/node_modules/phantomjs/node_modules/rimraf/node_modules/graceful-fs
node_modules/grunt-contrib-watch/node_modules/gaze/node_modules/globule/node_modules/glob/node_modules/graceful-fs


Sometimes the redundancy is due to different version requirements, but sometimes the redundancy is just redundant :p. Let's look with npm ls.

$ npm ls graceful-fs
bootstrap@3.0.0 /home/wking/src/bootstrap
├─┬ grunt@0.4.1
│ ├─┬ glob@3.1.21
│ │ └── graceful-fs@1.2.3
│ └─┬ rimraf@2.0.3
│   └── graceful-fs@1.1.14
├─┬ grunt-contrib-clean@0.5.0
│ └─┬ rimraf@2.2.2
│   └── graceful-fs@2.0.1
├─┬ grunt-contrib-qunit@0.2.2
│ └─┬ grunt-lib-phantomjs@0.3.1
│   └─┬ phantomjs@1.9.2-1
│     └─┬ rimraf@2.0.3
│       └── graceful-fs@1.1.14
└─┬ grunt-contrib-watch@0.5.3
  └─┬ gaze@0.4.1
    └─┬ globule@0.1.0
      └─┬ glob@3.1.21
        └── graceful-fs@1.2.3


Regardless of on-disk duplication, Node caches modules so a given module only loads once. If it really bothers you, you can avoid some duplicates by installing duplicated packages higher up in the local tree:

$ rm -rf node_modules
$ npm install graceful-fs@1.1.14
$ npm install
$ npm ls graceful-fs
bootstrap@3.0.0 /home/wking/src/bootstrap
├── graceful-fs@1.1.14  extraneous
├─┬ grunt@0.4.1
│ └─┬ glob@3.1.21
│   └── graceful-fs@1.2.3
├─┬ grunt-contrib-clean@0.5.0
│ └─┬ rimraf@2.2.2
│   └── graceful-fs@2.0.1
└─┬ grunt-contrib-watch@0.5.3
  └─┬ gaze@0.4.1
    └─┬ globule@0.1.0
      └─┬ glob@3.1.21
        └── graceful-fs@1.2.3


This is probably not worth the trouble.
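If you just want a quick report of which packages are duplicated in a checkout without replaying the npm dance, a short Python walk over node_modules does the trick. This is my own throwaway sketch, not part of npm or the Bootstrap workflow:

```python
import os

def duplicate_packages(root):
    """Map each package name to its install paths under nested node_modules dirs."""
    installs = {}
    for dirpath, dirnames, _filenames in os.walk(root):
        if os.path.basename(dirpath) != 'node_modules':
            continue
        for name in dirnames:
            if name.startswith('.'):   # skip .bin and friends
                continue
            installs.setdefault(name, []).append(os.path.join(dirpath, name))
    # keep only the packages installed more than once
    return {name: paths for name, paths in installs.items() if len(paths) > 1}
```

Running `duplicate_packages('.')` in the bootstrap checkout would list graceful-fs (among others) along with the path of each nested copy.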

Now that we have Grunt and the Bootstrap dependencies, we can build the distributed libraries:

$ ~/src/node_modules/.bin/grunt dist
Running "clean:dist" (clean) task
Cleaning dist...OK

Running "recess:bootstrap" (recess) task
File "dist/css/bootstrap.css" created.

Running "recess:min" (recess) task
File "dist/css/bootstrap.min.css" created.
Original: 121876 bytes.
Minified: 99741 bytes.

Running "recess:theme" (recess) task
File "dist/css/bootstrap-theme.css" created.

Running "recess:theme_min" (recess) task
File "dist/css/bootstrap-theme.min.css" created.
Original: 18956 bytes.
Minified: 17003 bytes.

Running "copy:fonts" (copy) task
Copied 4 files

Running "concat:bootstrap" (concat) task
File "dist/js/bootstrap.js" created.

Running "uglify:bootstrap" (uglify) task
File "dist/js/bootstrap.min.js" created.
Original: 58543 bytes.
Minified: 27811 bytes.

Done, without errors.


Woohoo! Unfortunately, like all language-specific packaging systems, npm has trouble installing packages that aren't written in its native language. This means you get things like:

$ ~/src/node_modules/.bin/grunt
…
jekyll build was initiated.

Jekyll output:
Use --force to continue.

Aborted due to warnings.


Once everybody wises up and starts writing packages for Gentoo Prefix, we can stop worrying about installation and get back to work developing :p.

Package management

Lex Nederbragt posted a question about version control and provenance on the Software Carpentry discussion list. I responded with my Portage-based workflow, but C. Titus Brown pointed out a number of reasons why this approach isn't more widely used, which seem to boil down to “that sounds like more trouble than it's worth”. Because recording the state of a system is important for reproducible research, it is worth doing something to clean up the current seat-of-the-pants approach.

Figuring out what software you have installed on your system is actually a (mostly) solved problem. There is a long history in the Linux ecosystem of package management systems that track installed packages and install new software (and any dependencies) automatically. Unfortunately, there is no consensus package manager across distributions, with Debian-based distributions using apt, Fedora-based distributions using yum, …. If you are not the system administrator for your computer, you can either talk your sysadmin into installing the packages you need, or use one of a number of guest package managers (Gentoo Prefix, homebrew, …). The guest package managers also work if you're committed to an OS that doesn't have an existing native package manager.

Despite the existence of many high-quality package managers, I know many people who continue to install significant amounts of software by hand. While this is sustainable for a handful of packages, I see no reason to struggle through manual installations (subsequent upgrades, dependencies, …) when existing tools can automate the procedure. A stopgap solution is to use language-specific package managers (pip for Python, gem for Ruby, …). This works fairly well, but once you reach a certain level of complexity (e.g. integrating Fortran and C extensions with Python in SciPy), things get difficult. While language-specific packaging standards ease automation, they are not a substitute for a language-agnostic package manager.

Many distributions distribute pre-compiled, binary packages, which give fast, stable installs without the need to have a full build system on your local machine. When the package you need is in the official repository (or a third-party repository), this approach works quite well. There's no need to go through the time or effort of compiling Firefox, LaTeX, LibreOffice, or other software that I interact with as a general user. However, my own packages (or actively developed libraries that I use from my own software) are rarely available as pre-compiled binaries. If you find yourself in this situation, it is useful to use a package manager that makes it easy to write source-based packages (Gentoo's Portage, Exherbo's Paludis, Arch's pacman, …).

With source-based packaging systems, packaging an existing Python package is usually a matter of listing a bit of metadata. With layman, integrating your local packages into your Portage tree is extremely simple. Does your package depend on some other package in another oddball language? Some wonky build tool? No problem! Just list the new dependency in your ebuild (it probably already exists). Source-based package managers also make it easy to stay up to date with ongoing development. Portage supports live ebuilds that build fresh checkouts from a project's version control repository (use Git!). There is no need to dig out your old installation notes or reread the project's installation instructions.

Getting back to the goals of reproducible research, I think that existing package managers are an excellent solution for tracking the software used to perform experiments or run simulations and analysis. The main stumbling block is the lack of market penetration ;). Building a lightweight package manager that can work easily at both the system-wide and per-user levels across a range of host OSes is hard work. With the current fractured packaging ecosystem, I doubt that rolling a new package manager from scratch would be an effective approach. Existing package managers have mostly satisfied their users, and the fundamental properties haven't changed much in over a decade. Writing a system appealing enough to drag these satisfied users over to your new system is probably not going to happen.

Portage (and Gentoo Prefix) get you most of the way there, with the help of well written specifications and documentation. However, compatibility and testing in the prefix configuration still need some polishing, as does robust binary packaging support. These issues are less interesting to most Portage developers, as they usually run Portage as the native package manager and avoid binary packages. If the broader scientific community is interested in sustainable software, I think effort channeled into polishing these use-cases would be time well spent.

For those less interested in adopting a full-fledged package manager, you should at least make some effort to package your software. I have used software that didn't even have a README with build instructions, and compiling it was awful. If you're publishing your software in the hopes that others will find it, use it, and cite you in their subsequent paper, it behooves you to make the installation as easy as possible. Until your community coalesces around a single package management framework, picking a standard build system (Autotools, Distutils, …) will at least make it easier for folks to install your software by hand.

Factor analysis

I've been trying to wrap my head around factor analysis as a theory for designing and understanding test and survey results. This has turned out to be another one of those fields where the going has been a bit rough. I think the key factors in making these older topics difficult are:

• “Everybody knows this, so we don't need to write up the details.”
• “Hey, I can do better than Bob if I just tweak this knob…”
• “I'll just publish this seminal paper behind a paywall…”

The resulting discussion ends up being overly complicated, and it's hard for newcomers to decide if people using similar terminology are in fact talking about the same thing.

Some of the better open sources for background have been Tucker and MacCallum's “Exploratory Factor Analysis” manuscript and Max Welling's notes. I'll use Welling's terminology for this discussion.

The basic idea of factor analysis is to model $d$ measurable attributes as generated by $k$ common factors and $d$ unique factors. With $d=4$ and $k=2$, the model corresponds to the equation (Welling's eq. 1):

(1)$x=Ay+\mu +\nu$

The independent random variables $y$ are distributed according to a Gaussian with zero mean and unit variance ${𝒢}_{y}\left[0,I\right]$ (zero mean because constant offsets are handled by $\mu$; unit variance because scaling is handled by $A$). The independent random variables $\nu$ are distributed according to ${𝒢}_{\nu }\left[0,\Sigma \right]$, with (Welling's eq. 2):

(2)$\Sigma \equiv \text{diag}\left[{\sigma }_{1}^{2},\dots ,{\sigma }_{d}^{2}\right]$
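Before estimating anything, it can help to run the model forward. Here is a small NumPy sketch (my own illustration; the dimensions and noise levels are made up) that samples attributes $x$ from the generative model in equations (1) and (2):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, N = 8, 2, 10000  # hypothetical sizes: 8 attributes, 2 common factors

A = rng.normal(size=(d, k))            # factor loadings
mu = rng.normal(size=d)                # constant offsets
sigma = rng.uniform(0.1, 0.5, size=d)  # unique-noise standard deviations

y = rng.normal(size=(N, k))            # common factors, G_y[0, I]
nu = rng.normal(size=(N, d)) * sigma   # unique factors, G_nu[0, Sigma]
x = y @ A.T + mu + nu                  # Welling's eq. 1

# The model predicts cov(x) = A A^T + Sigma; the sample covariance
# of the simulated responses should approach it as N grows.
model_cov = A @ A.T + np.diag(sigma**2)
sample_cov = np.cov(x, rowvar=False)
```

The covariance structure $AA^T+\Sigma$ is all the data can ever tell us about $A$, which is why the loadings are only recoverable up to a rotation.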

The matrix $A$ (linking common factors with measured attributes $x$) is referred to as the factor weights or factor loadings. Because the only source of constant offset is $\mu$, we can calculate it by averaging out the random noise (Welling's eq. 6):

(3)$\mu =\frac{1}{N}\sum _{n=1}^{N}{x}_{n}$

where $N$ is the number of measurements (survey responders) and ${x}_{n}$ is the response vector for the ${n}^{\text{th}}$ responder.

How do we find $A$ and $\Sigma$? This is the tricky bit, and there are a number of possible approaches. Welling suggests using expectation maximization (EM), and there's an excellent example of the procedure with a colorblind experimenter drawing colored balls in his [EM notes][EM] (to test my understanding, I wrote [color-ball.py](../../posts/Factor_analysis/color-ball.py)). To simplify calculations, Welling defines (before eq. 15):

(4)$\begin{array}{rl}A\prime & \equiv \left[A,\mu \right]\\ y\prime & \equiv \left[{y}^{T},1\right]^{T}\end{array}$

which reduce the model to

(5)$x=A\prime y\prime +\nu$

After some manipulation Welling works out the maximizing updates (eq'ns 16 and 17):

(6)$\begin{array}{rl}A{\prime }^{\text{new}}& =\left(\sum _{n=1}^{N}{x}_{n}E\left[y\prime \mid {x}_{n}\right]^{T}\right){\left(\sum _{n=1}^{N}E\left[y\prime y{\prime }^{T}\mid {x}_{n}\right]\right)}^{-1}\\ {\Sigma }^{\text{new}}& =\frac{1}{N}\sum _{n=1}^{N}\text{diag}\left[{x}_{n}{x}_{n}^{T}-A{\prime }^{\text{new}}E\left[y\prime \mid {x}_{n}\right]{x}_{n}^{T}\right]\end{array}$

The expectation values used in these updates are given by (Welling's eq'ns 12 and 13):

(7)$\begin{array}{rl}E\left[y\mid {x}_{n}\right]& ={A}^{T}{\left(A{A}^{T}+\Sigma \right)}^{-1}\left({x}_{n}-\mu \right)\\ E\left[y{y}^{T}\mid {x}_{n}\right]& =I-{A}^{T}{\left(A{A}^{T}+\Sigma \right)}^{-1}A+E\left[y\mid {x}_{n}\right]E\left[y\mid {x}_{n}\right]^{T}\end{array}$

Survey analysis

Enough abstraction! Let's look at an example: [survey results][survey]:

>>> import numpy
>>> scores = numpy.genfromtxt('Factor_analysis/survey.data', delimiter='\t')
>>> scores
array([[ 1.,  3.,  4.,  6.,  7.,  2.,  4.,  5.],
       [ 2.,  3.,  4.,  3.,  4.,  6.,  7.,  6.],
       [ 4.,  5.,  6.,  7.,  7.,  2.,  3.,  4.],
       [ 3.,  4.,  5.,  6.,  7.,  3.,  5.,  4.],
       [ 2.,  5.,  5.,  5.,  6.,  2.,  4.,  5.],
       [ 3.,  4.,  6.,  7.,  7.,  4.,  3.,  5.],
       [ 2.,  3.,  6.,  4.,  5.,  4.,  4.,  4.],
       [ 1.,  3.,  4.,  5.,  6.,  3.,  3.,  4.],
       [ 3.,  3.,  5.,  6.,  6.,  4.,  4.,  3.],
       [ 4.,  4.,  5.,  6.,  7.,  4.,  3.,  4.],
       [ 2.,  3.,  6.,  7.,  5.,  4.,  4.,  4.],
       [ 2.,  3.,  5.,  7.,  6.,  3.,  3.,  3.]])

scores[i,j] is the answer the ith respondent gave for the jth question. We're looking for underlying factors that can explain covariance between the different questions. Do the question answers ($x$) represent some underlying factors ($y$)? Let's start off by calculating $\mu$:

>>> def print_row(row):
...     print('  '.join('{: 0.2f}'.format(x) for x in row))
>>> mu = scores.mean(axis=0)
>>> print_row(mu)
 2.42   3.58   5.08   5.75   6.08   3.42   3.92   4.25

Next we need priors for $A$ and $\Sigma$. MDP has an implementation for Python, and their [FANode][] uses a Gaussian random matrix for $A$ and the diagonal of the score covariance for $\Sigma$. They also use the score covariance to avoid repeated summations over $n$.

>>> import mdp
>>> def print_matrix(matrix):
...     for row in matrix:
...         print_row(row)
>>> fa = mdp.nodes.FANode(output_dim=3)
>>> numpy.random.seed(1)  # for consistent doctest results
>>> responder_scores = fa(scores)  # common factors for each responder
>>> print_matrix(responder_scores)
-1.92  -0.45   0.00
 0.67   1.97   1.96
 0.70   0.03  -2.00
 0.29   0.03  -0.60
-1.02   1.79  -1.43
 0.82   0.27  -0.23
-0.07  -0.08   0.82
-1.38  -0.27   0.48
 0.79  -1.17   0.50
 1.59  -0.30  -0.41
 0.01  -0.48   0.73
-0.46  -1.34   0.18
>>> print_row(fa.mu.flat)
 2.42   3.58   5.08   5.75   6.08   3.42   3.92   4.25
>>> fa.mu.flat == mu  # MDP agrees with our earlier calculation
array([ True,  True,  True,  True,  True,  True,  True,  True], dtype=bool)
>>> print_matrix(fa.A)  # factor weights for each question
 0.80  -0.06  -0.45
 0.17   0.30  -0.65
 0.34  -0.13  -0.25
 0.13  -0.73  -0.64
 0.02  -0.32  -0.70
 0.61   0.23   0.86
 0.08   0.63   0.59
-0.09   0.67   0.13
>>> print_row(fa.sigma)  # unique noise for each question
 0.04   0.02   0.38   0.55   0.30   0.05   0.48   0.21

Because the covariance is unaffected by the rotation $A\to AR$, the estimated weights $A$ and responder scores $y$ can be quite sensitive to the seed priors. The width $\Sigma$ of the unique noise $\nu$ is more robust, because $\Sigma$ is unaffected by rotations on $A$.

Related tidbits

Communality

The [communality][] ${h}_{i}^{2}$ of the ${i}^{\text{th}}$ measured attribute ${x}_{i}$ is the fraction of variance in the measured attribute which is explained by the set of common factors. Because the common factors $y$ have unit variance, the communality is given by:

(8)${h}_{i}=\frac{\sum _{j=1}^{k}{A}_{\mathrm{ij}}^{2}}{\sum _{j=1}^{k}{A}_{\mathrm{ij}}^{2}+{\sigma }_{i}^{2}}$

>>> factor_variance = numpy.array([sum(row**2) for row in fa.A])
>>> h = numpy.array(
...     [var/(var+sig) for var,sig in zip(factor_variance, fa.sigma)])
>>> print_row(h)
 0.95   0.97   0.34   0.64   0.66   0.96   0.61   0.69

There may be some scaling issues in the communality due to deviations between the estimated $A$ and $\Sigma$ and the variations contained in the measured scores (why?):

>>> print_row(factor_variance + fa.sigma)
0.89   0.56   0.57   1.51   0.89   1.21   1.23   0.69
>>> print_row(scores.var(axis=0, ddof=1))  # total variance for each question
0.99   0.63   0.63   1.66   0.99   1.36   1.36   0.75
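The EM updates MDP's FANode is performing are compact enough to implement directly. Here is a minimal NumPy sketch of the loop (my own toy implementation of Welling's eq'ns 12, 13, 16 and 17, working with centered responses rather than the augmented $A\prime$; MDP's code is the production version):

```python
import numpy as np

def fa_em(x, k, n_iter=500, seed=0):
    """Fit x = A y + mu + nu by expectation maximization."""
    rng = np.random.default_rng(seed)
    N, d = x.shape
    mu = x.mean(axis=0)              # Welling's eq. 6
    xc = x - mu                      # centered responses
    A = rng.normal(size=(d, k))      # random prior for the loadings
    sigma2 = xc.var(axis=0)          # prior for the diagonal of Sigma
    for _ in range(n_iter):
        # E step (Welling's eq'ns 12 and 13)
        G = np.linalg.inv(A @ A.T + np.diag(sigma2))
        Ey = xc @ G @ A                                   # row n is E[y|x_n]
        Eyy = N * (np.eye(k) - A.T @ G @ A) + Ey.T @ Ey   # sum_n E[y y^T|x_n]
        # M step (Welling's eq'ns 16 and 17)
        A = xc.T @ Ey @ np.linalg.inv(Eyy)
        sigma2 = np.mean(xc**2 - xc * (Ey @ A.T), axis=0)
        sigma2 = np.maximum(sigma2, 1e-8)  # numerical guard
    return A, sigma2, mu
```

After convergence, $AA^T+\Sigma$ should reproduce the sample covariance of the responses, even though $A$ itself is only determined up to a rotation.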


The proportion of total variation explained by the common factors is given by:

(9)$\frac{1}{d}\sum _{i=1}^{d}{h}_{i}$
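In code (my own arithmetic, assuming the attributes are standardized so the explained fraction is just the average communality), using the h values printed in the survey example:

```python
# communalities printed for the eight survey questions above
h = [0.95, 0.97, 0.34, 0.64, 0.66, 0.96, 0.61, 0.69]

# fraction of total variation explained by the common factors, with d = 8
proportion = sum(h) / len(h)  # -> 0.7275
```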

Varimax rotation

As mentioned earlier, factor analysis generates loadings $A$ that are unique up to an arbitrary rotation $R$ (as you'd expect for a $k$-dimensional Gaussian ball of factors $y$). A number of schemes have been proposed to simplify the initial loadings by rotating $A$ to reduce off-diagonal terms. One of the more popular approaches is Henry Kaiser's varimax rotation (unfortunately, I don't have access to either his thesis or the subsequent paper). I did find (via Wikipedia) Trevor Park's notes, which have been very useful.

The idea is to iterate rotations to maximize the raw varimax criterion (Park's eq. 1):

(10)$V\left(A\right)=\sum _{j=1}^{k}\left(\frac{1}{d}\sum _{i=1}^{d}{A}_{\mathrm{ij}}^{4}-{\left(\frac{1}{d}\sum _{i=1}^{d}{A}_{\mathrm{ij}}^{2}\right)}^{2}\right)$

Rather than computing a $k$-dimensional rotation in one sweep, we'll iterate through 2-dimensional rotations (on successive column pairs) until convergence. For a particular column pair $\left(p,q\right)$, the rotation matrix ${R}^{*}$ is the usual rotation matrix:

(11)${R}^{*}=\left(\begin{array}{cc}\mathrm{cos}\left({\varphi }^{*}\right)& -\mathrm{sin}\left({\varphi }^{*}\right)\\ \mathrm{sin}\left({\varphi }^{*}\right)& \mathrm{cos}\left({\varphi }^{*}\right)\end{array}\right)$

where the optimum rotation angle ${\varphi }^{*}$ is (Park's eq. 3):

(12)${\varphi }^{*}=\frac{1}{4}\angle \left(\frac{1}{d}\sum _{j=1}^{d}{\left({A}_{\mathrm{jp}}+{\mathrm{iA}}_{\mathrm{jq}}\right)}^{4}-{\left(\frac{1}{d}\sum _{j=1}^{d}{\left({A}_{\mathrm{jp}}+{\mathrm{iA}}_{\mathrm{jq}}\right)}^{2}\right)}^{2}\right)$

where $i\equiv \sqrt{-1}$.
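Putting Park's update together, here is a rough Python implementation (my own sketch, not Kaiser's original algorithm) that sweeps over column pairs until no pair wants to rotate:

```python
import cmath
import numpy as np

def varimax(A, tol=1e-12, max_sweeps=100):
    """Rotate the columns of a d-by-k loading matrix A toward the raw
    varimax criterion, one column pair at a time (Park's eq'ns 1-3)."""
    A = np.array(A, dtype=float)
    d, k = A.shape
    for _ in range(max_sweeps):
        converged = True
        for p in range(k - 1):
            for q in range(p + 1, k):
                z = A[:, p] + 1j * A[:, q]
                # optimal rotation angle for this pair (Park's eq. 3)
                w = (z**4).sum() / d - ((z**2).sum() / d) ** 2
                phi = cmath.phase(w) / 4
                if abs(phi) < tol:
                    continue
                converged = False
                c, s = np.cos(phi), np.sin(phi)
                # apply R* to columns p and q
                A[:, p], A[:, q] = c * A[:, p] + s * A[:, q], -s * A[:, p] + c * A[:, q]
        if converged:
            break
    return A
```

As a check, feeding it a simple structure that has been rotated away from the axes should rotate it straight back.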

Nomenclature

${A}_{\mathrm{ij}}$
The element from the ${i}^{\text{th}}$ row and ${j}^{\text{th}}$ column of a matrix $A$. For example, here is a 2-by-3 matrix in terms of its components:
(13)$A=\left(\begin{array}{ccc}{A}_{11}& {A}_{12}& {A}_{13}\\ {A}_{21}& {A}_{22}& {A}_{23}\end{array}\right)$
${A}^{T}$
The transpose of a matrix (or vector) $A$. ${A}_{\mathrm{ij}}^{T}={A}_{\mathrm{ji}}$
${A}^{-1}$
The inverse of a matrix $A$. ${A}^{-1}A=I$
$\text{diag}\left[A\right]$
A matrix containing only the diagonal elements of $A$, with the off-diagonal values set to zero.
$E\left[f\left(x\right)\right]$
Expectation value for a function $f$ of a random variable $x$. If the probability density of $x$ is $p\left(x\right)$, then $E\left[f\left(x\right)\right]=\int dxp\left(x\right)f\left(x\right)$. For example, $E\left[1\right]=1$.
$\mu$
The mean of a random variable $x$ is given by $\mu =E\left[x\right]$.
$\Sigma$
The covariance of a random variable $x$ is given by $\Sigma =E\left[\left(x-\mu \right)\left(x-\mu {\right)}^{T}\right]$. In the factor analysis model discussed above, $\Sigma$ is restricted to a diagonal matrix.
${𝒢}_{x}\left[\mu ,\Sigma \right]$
A Gaussian probability density for the random variables $x$ with a mean $\mu$ and a covariance $\Sigma$.
(14)${𝒢}_{x}\left[\mu ,\Sigma \right]=\frac{1}{\left(2\pi {\right)}^{\frac{D}{2}}\sqrt{\mathrm{det}\left[\Sigma \right]}}{e}^{-\frac{1}{2}\left(x-\mu {\right)}^{T}{\Sigma }^{-1}\left(x-\mu \right)}$
$p\left(y\mid x\right)$
Probability of $y$ occurring given that $x$ occurred. This is commonly used in Bayesian statistics.
$p\left(x,y\right)$
Probability of $y$ and $x$ occurring simultaneously (the joint density). $p\left(x,y\right)=p\left(x\mid y\right)p\left(y\right)$
$\angle \left(z\right)$
The angle of $z$ in the complex plane. $\angle \left({\mathrm{re}}^{i\theta }\right)=\theta$.

Note: if you have trouble viewing some of the more obscure Unicode used in this post, you might want to install the STIX fonts.

catalyst

Available in a git repository.
Repository: catalyst-swc
Browsable repository: catalyst-swc
Author: W. Trevor King

Catalyst is a release-building tool for Gentoo. If you use Gentoo and want to roll your own live CD or bootable USB drive, this is the way to go. As I've been wrapping my head around catalyst, I've been pushing my notes upstream. This post builds on those notes to discuss the construction of a bootable ISO for Software Carpentry boot camps.

Getting a patched up catalyst

Catalyst has been around for a while, but the user base has been fairly small. If you try to do something that Gentoo's Release Engineering team doesn't do on a regular basis, built-in catalyst support can be spotty. There have been a fair number of patch submissions on gentoo-catalyst@ recently, but patch acceptance can be slow. For the SWC ISO, I applied versions of the following patches (or patch series) to 37540ff:

Configuring catalyst

The easiest way to run catalyst from a Git checkout is to set up a local config file. I didn't have enough hard drive space on my local system (~16 GB) for this build, so I set things up in a temporary directory on an external hard drive:

$ cat catalyst.conf | grep -v '^#\|^$'
digests="md5 sha1 sha512 whirlpool"
contents="auto"
distdir="/usr/portage/distfiles"
envscript="/etc/catalyst/catalystrc"
hash_function="crc32"
options="autoresume kerncache pkgcache seedcache snapcache"
portdir="/usr/portage"
sharedir="/home/wking/src/catalyst"
snapshot_cache="/mnt/d/tmp/catalyst/snapshot_cache"
storedir="/mnt/d/tmp/catalyst"


I used the default values for everything except sharedir, snapshot_cache, and storedir. Then I cloned the catalyst-swc repository into /mnt/d/tmp/catalyst.

Portage snapshot and a seed stage

Take a snapshot of the current Portage tree:

# catalyst -c catalyst.conf --snapshot 20130208


Then download a seed stage3 to build from:

# wget -O /mnt/d/tmp/catalyst/builds/default/stage3-i686-20121213.tar.bz2 \
>   http://distfiles.gentoo.org/releases/x86/current-stage3/stage3-i686-20121213.tar.bz2


Building the live CD

# catalyst -c catalyst.conf -f /mnt/d/tmp/catalyst/spec/default-stage1-i686-2013.1.spec
# catalyst -c catalyst.conf -f /mnt/d/tmp/catalyst/spec/default-stage2-i686-2013.1.spec
# catalyst -c catalyst.conf -f /mnt/d/tmp/catalyst/spec/default-stage3-i686-2013.1.spec
# catalyst -c catalyst.conf -f /mnt/d/tmp/catalyst/spec/default-livecd-stage1-i686-2013.1.spec
# catalyst -c catalyst.conf -f /mnt/d/tmp/catalyst/spec/default-livecd-stage2-i686-2013.1.spec


isohybrid

To make the ISO bootable from a USB drive, I used isohybrid:

# cp swc-x86.iso swc-x86-isohybrid.iso
# isohybrid swc-x86-isohybrid.iso


You can install the resulting ISO on a USB drive with:

# dd if=swc-x86-isohybrid.iso of=/dev/sdX


replacing X with the appropriate drive letter for your USB drive.

With versions of catalyst after d1c2ba9, the isohybrid call is built into catalyst's ISO construction.

SymPy

SymPy is a Python library for symbolic mathematics. To give you a feel for how it works, let's extrapolate the extremum location for $f(x)$ given a quadratic model:

(1) $f(x) = A x^2 + B x + C$

and three known values:

(2) $\begin{aligned} f(a) &= A a^2 + B a + C \\ f(b) &= A b^2 + B b + C \\ f(c) &= A c^2 + B c + C \end{aligned}$

Rephrase as a matrix equation:

(3) $\begin{pmatrix} f(a) \\ f(b) \\ f(c) \end{pmatrix} = \begin{pmatrix} a^2 & a & 1 \\ b^2 & b & 1 \\ c^2 & c & 1 \end{pmatrix} \cdot \begin{pmatrix} A \\ B \\ C \end{pmatrix}$

So the solutions for $A$, $B$, and $C$ are:

(4) $\begin{pmatrix} A \\ B \\ C \end{pmatrix} = \begin{pmatrix} a^2 & a & 1 \\ b^2 & b & 1 \\ c^2 & c & 1 \end{pmatrix}^{-1} \cdot \begin{pmatrix} f(a) \\ f(b) \\ f(c) \end{pmatrix} = \begin{pmatrix} \text{long} \\ \text{complicated} \\ \text{stuff} \end{pmatrix}$

Now that we've found the model parameters, we need to find the $x$ coordinate of the extremum.

(5) $\frac{\mathrm{d}f}{\mathrm{d}x} = 2Ax + B \;,$

which is zero when

(6) $\begin{aligned} 2Ax &= -B \\ x &= \frac{-B}{2A} \end{aligned}$

Here's the solution in SymPy:

>>> from sympy import Symbol, Matrix, factor, expand, pprint, preview
>>> a = Symbol('a')
>>> b = Symbol('b')
>>> c = Symbol('c')
>>> fa = Symbol('fa')
>>> fb = Symbol('fb')
>>> fc = Symbol('fc')
>>> M = Matrix([[a**2, a, 1], [b**2, b, 1], [c**2, c, 1]])
>>> F = Matrix([[fa],[fb],[fc]])
>>> ABC = M.inv() * F
>>> A = ABC[0,0]
>>> B = ABC[1,0]
>>> x = -B/(2*A)
>>> x = factor(expand(x))
>>> pprint(x)
2       2       2       2       2       2
a *fb - a *fc - b *fa + b *fc + c *fa - c *fb
---------------------------------------------
2*(a*fb - a*fc - b*fa + b*fc + c*fa - c*fb)
>>> preview(x, viewer='pqiv')


where pqiv is the executable for pqiv, my preferred image viewer. With a bit of additional factoring, that is:

(7) $x = \frac{a^2\left[f(b) - f(c)\right] + b^2\left[f(c) - f(a)\right] + c^2\left[f(a) - f(b)\right]}{2\left\{a\left[f(b) - f(c)\right] + b\left[f(c) - f(a)\right] + c\left[f(a) - f(b)\right]\right\}}$
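As a quick numeric sanity check (my own addition; `extremum_x` just transcribes the final formula), sampling a parabola with a known vertex should recover that vertex exactly:

```python
# Sanity check of the extremum formula: sample a parabola with a known
# vertex at x = 3 and confirm the formula recovers it exactly.
def extremum_x(a, fa, b, fb, c, fc):
    num = a**2 * (fb - fc) + b**2 * (fc - fa) + c**2 * (fa - fb)
    den = 2 * (a * (fb - fc) + b * (fc - fa) + c * (fa - fb))
    return num / den

f = lambda x: 2 * (x - 3)**2 + 1  # minimum at x = 3
a, b, c = 1.0, 2.0, 4.5
print(extremum_x(a, f(a), b, f(b), c, f(c)))  # → 3.0
```

Any three distinct sample points work, since three points determine the quadratic exactly.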
One-off Git daemon

In my gitweb post, I explain how to set up git daemon to serve git:// requests under Nginx on Gentoo. This post talks about a different situation, where you want to toss up a Git daemon for collaboration on your LAN. This is useful when you're teaching Git to a room full of LAN-sharing students, and you don't want to bother setting up public repositories more permanently.

Serving a few repositories

Say you have a repository that you want to serve:

$ mkdir -p ~/src/my-project
$ cd ~/src/my-project
$ git init
…hack hack hack…


Fire up the daemon (probably in another terminal so you can keep hacking in your original terminal) with:

$ cd ~/src
$ git daemon --export-all --base-path=. --verbose ./my-project


Then you can clone with:

$ git clone git://192.168.1.2/my-project


replacing 192.168.1.2 with your public IP address (e.g. from ip addr show scope global). Add additional repository paths to the git daemon call to serve additional repositories.

Serving a single repository

If you don't want to bother listing my-project in your URLs, you can base the daemon in the project itself (instead of in the parent directory):

$ cd
$ git daemon --export-all --base-path=src/my-project --verbose


Then you can clone with:

$ git clone git://192.168.1.2/


This may be more convenient if you're only sharing a single repository.

Enabling pushes

If you want your students to be able to push to your repository during class, you can run:

$ git daemon --enable=receive-pack …


Only do this on a trusted LAN with a junk test repository, because it will allow anybody to push anything or remove references.

PDF forms

You can use pdftk to fill out PDF forms (thanks for the inspiration, Joe Rothweiler). The syntax is simple:

$ pdftk input.pdf fill_form data.fdf output output.pdf


where input.pdf is the input PDF containing the form, data.fdf is an FDF or XFDF file containing your data, and output.pdf is the name of the PDF you're creating. The tricky part is figuring out what to put in data.fdf. There's a useful comparison of the Forms Data Format (FDF) and its XML version (XFDF) in the XFDF specification. XFDF only covers a subset of FDF, so I won't worry about it here. FDF is defined in section 12.7.7 of ISO 32000-1:2008, the PDF 1.7 specification, and it has been in PDF specifications since version 1.2.

Forms Data Format (FDF)

FDF files are basically stripped down PDFs (§12.7.7.1). A simple FDF file will look something like:

%FDF-1.2
1 0 obj<</FDF<</Fields[
<</T(FIELD1_NAME)/V(FIELD1_VALUE)>>
<</T(FIELD2_NAME)/V(FIELD2_VALUE)>>
…
] >> >>
endobj
trailer
<</Root 1 0 R>>
%%EOF


Broken down into the lingo of ISO 32000, we have a header (§12.7.7.2.2):

%FDF-1.2


followed by a body with a single object (§12.7.7.2.3):

1 0 obj<</FDF<</Fields[
<</T(FIELD1_NAME)/V(FIELD1_VALUE)>>
<</T(FIELD2_NAME)/V(FIELD2_VALUE)>>
…
] >> >>
endobj


followed by a trailer (§12.7.7.2.4):

trailer
<</Root 1 0 R>>
%%EOF


Despite the claims in §12.7.7.2.1 that the trailer is optional, pdftk choked on files without it:

$ cat no-trailer.fdf
%FDF-1.2
1 0 obj<</FDF<</Fields[
<</T(Name)/V(Trevor)>>
<</T(Date)/V(2012-09-20)>>
] >> >>
endobj
$ pdftk input.pdf fill_form no-trailer.fdf output output.pdf
Error: Failed to open form data file:
data.fdf
No output created.


Trailers are easy to add, since all they require is a reference to the root of the FDF catalog dictionary. If you only have one dictionary, you can always use the simple trailer I gave above.
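The format is simple enough that you can generate FDF directly from a script. Here's a sketch (`make_fdf` is a hypothetical helper of mine, not a pdftk feature) that builds a minimal FDF file like the one above from a dict of field names and values:

```python
# Sketch: build a minimal FDF document from a dict of field names and
# values. Hypothetical helper, not part of pdftk.
def make_fdf(fields):
    def esc(s):
        # Backslashes and parentheses must be escaped inside PDF
        # literal strings (ISO 32000-1, section 7.3.4.2).
        return s.replace("\\", r"\\").replace("(", r"\(").replace(")", r"\)")

    lines = ["%FDF-1.2", "1 0 obj<</FDF<</Fields["]
    for name, value in fields.items():
        lines.append("<</T({})/V({})>>".format(esc(name), esc(value)))
    lines += ["] >> >>", "endobj", "trailer", "<</Root 1 0 R>>", "%%EOF"]
    return "\n".join(lines) + "\n"

print(make_fdf({"Name": "Trevor", "Date": "2012-09-20"}))
```

Write the result to data.fdf and feed it to `pdftk … fill_form` as above.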

FDF Catalog

The meat of the FDF file is the catalog (§12.7.7.3). Let's take a closer look at the catalog structure:

1 0 obj<</FDF<</Fields[
…
] >> >>


This defines a new object (the FDF catalog) which contains one key (the /FDF dictionary). The FDF dictionary contains one key (/Fields) and its associated array of fields. Then we close the /Fields array (]), close the FDF dictionary (>>) and close the FDF catalog (>>).

There are a number of interesting entries that you can add to the FDF dictionary (§12.7.7.3.1, table 243), some of which require a more advanced FDF version. You can use the /Version key in the FDF catalog (§12.7.7.3.1, table 242) to specify the version of the data in the dictionary:

1 0 obj<</Version/1.3/FDF<</Fields[…


Now you can extend the dictionary using table 244. Let's set things up to use UTF-8 for the field values (/V) or options (/Opt):

1 0 obj<</Version/1.3/FDF<</Encoding/utf_8/Fields[
<</T(FIELD1_NAME)/V(FIELD1_VALUE)>>
<</T(FIELD2_NAME)/V(FIELD2_VALUE)>>
…
] >> >>
endobj


pdftk understands raw text in the specified encoding ((…)), raw UTF-16 strings starting with a BOM ((\xFE\xFF…)), or UTF-16BE strings encoded as ASCII hex (<FEFF…>). You can use pdf-merge.py and its --unicode option to find the latter. Support for the /utf_8 encoding in pdftk is new. I mailed a patch to pdftk's Sid Steward and posted a patch request to the underlying iText library. Until those get accepted, you're stuck with the less convenient encodings.
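In the meantime, the ASCII-hex UTF-16BE form is easy to produce from a script. Here's a sketch (`pdf_hex_string` is my own hypothetical helper) that encodes a Unicode field value in the `<FEFF…>` form that pdftk accepts without any /Encoding key:

```python
# Sketch: encode a Unicode string in the <FEFF…> ASCII-hex UTF-16BE
# form for PDF/FDF string values. Hypothetical helper, not pdftk API.
def pdf_hex_string(s):
    raw = b"\xfe\xff" + s.encode("utf-16-be")  # BOM + UTF-16BE payload
    return "<" + raw.hex().upper() + ">"

print(pdf_hex_string("é"))  # → <FEFF00E9>
```

Drop the result into a /V entry in place of a literal `(…)` string, e.g. `<</T(Name)/V<FEFF00E9>>>`.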

Fonts

Say you fill in some Unicode values, but your PDF reader is having trouble rendering some funky glyphs. Maybe it doesn't have access to the right font? You can see which fonts are embedded in a given PDF using pdffonts.

$ pdffonts input.pdf
name                                 type              emb sub uni object ID
------------------------------------ ----------------- --- --- --- ---------
MMXQDQ+UniversalStd-NewswithCommPi   CID Type 0C       yes yes yes   1738  0
MMXQDQ+ZapfDingbatsStd               CID Type 0C       yes yes yes   1749  0
MMXQDQ+HelveticaNeueLTStd-Roman      Type 1C           yes yes no    1737  0
CPZITK+HelveticaNeueLTStd-BlkCn      Type 1C           yes yes no    1739  0
…


If you don't have the right font for your new data, you can add it using current versions of iText. However, pdftk uses an older version, so I'm not sure how to translate this idea for pdftk.

FDF templates and field names

You can use pdftk itself to create an FDF template, which it does with embedded UTF-16BE (you can see the FE FF BOMs at the start of each string value).

$ pdftk input.pdf generate_fdf output template.fdf
$ hexdump -C template.fdf | head
00000000  25 46 44 46 2d 31 2e 32  0a 25 e2 e3 cf d3 0a 31  |%FDF-1.2.%.....1|
00000010  20 30 20 6f 62 6a 20 0a  3c 3c 0a 2f 46 44 46 20  | 0 obj .<<./FDF |
00000020  0a 3c 3c 0a 2f 46 69 65  6c 64 73 20 5b 0a 3c 3c  |.<<./Fields [.<<|
00000030  0a 2f 56 20 28 fe ff 29  0a 2f 54 20 28 fe ff 00  |./V (..)./T (...|
00000040  50 00 6f 00 73 00 74 00  65 00 72 00 4f 00 72 00  |P.o.s.t.e.r.O.r.|
…


You can also dump a more human friendly version of the PDF's fields (without any default data):

$ pdftk input.pdf dump_data_fields_utf8 output data.txt
$ cat data.txt
---
FieldType: Text
FieldName: Name
FieldNameAlt: Name:
FieldFlags: 0
FieldJustification: Left
---
FieldType: Text
FieldName: Date
FieldNameAlt: Date:
FieldFlags: 0
FieldJustification: Left
---
FieldType: Text
FieldName: Advisor
FieldNameAlt: Advisor:
FieldFlags: 0
FieldJustification: Left
---
…


If the fields are poorly named, you may have to fill the entire form with unique values and then see which values appeared where in the output PDF (for an example, see codehero's identify_pdf_fields.js).

Conclusions

This would be so much easier if people just used YAML or JSON instead of bothering with PDFs ;).

Portage

Portage is Gentoo's default package manager. This post isn't supposed to be a tutorial; the handbook does a pretty good job of that already. I'm just recording a few tricks so I don't forget them.

User patches

While playing around with LDAP, I was trying to troubleshoot the SASL_NOCANON handling. “Gee,” I thought, “wouldn't it be nice to be able to add debugging printfs to figure out what was happening?” Unfortunately, I had trouble getting ldapwhoami working when I compiled it by hand. “Grrr,” I thought, “I just want to add a simple patch and do whatever the ebuild already does.” This is actually pretty easy to do, once you're looking in the right places.

Write your patch

I'm not going to cover that here.

Place your patch where epatch_user will find it

This would be under

/etc/portage/patches/<CATEGORY>/<PF|P|PN>/


If your ebuild already calls epatch_user, or it uses an eclass like base that calls epatch_user internally, you're done. If not, read on…

Forcing epatch_user

While you could always write an overlay with an improved ebuild, a quicker fix for this kind of hack is /etc/portage/bashrc. I used:

if [ "${EBUILD_PHASE}" == "prepare" ]; then
echo ":: Calling epatch_user";
pushd "${S}"
epatch_user
popd
fi


to insert my patches at the beginning of the prepare phase.

Cleaning up

It's safe to call epatch_user multiple times, so you can leave this setup in place if you like. However, you might run into problems if you touch autoconf files, so you may want to move your bashrc somewhere else until you need it again!

DVD Backup

I've been using abcde to rip our audio CD collection onto our fileserver for a few years now. Then I can play songs from across the collection using MPD without having to dig the original CDs out of the closet. I just picked up a large external hard drive and thought it might be time to take a look at ripping our DVD collection as well. There is an excellent Quick-n-Dirty Guide that goes into more detail on all of this, but here's an executive summary.

Make sure your kernel understands the UDF file system:

$ grep CONFIG_UDF_FS /usr/src/linux/.config


If your kernel was compiled with CONFIG_IKCONFIG_PROC enabled, you could use

$ zcat /proc/config.gz | grep CONFIG_UDF_FS


instead, to make sure you're checking the configuration of the currently running kernel.

If the udf driver was compiled as a module, make sure it's loaded:

$ sudo modprobe udf


Then mount the disc:

$ sudo mount /dev/dvd /mnt/dvd


Now you're ready to rip. You've got two options: you can copy the VOBs over directly, or rip the DVD into an alternative container format such as Matroska.

Vobcopy

Mirror the disc with vobcopy (media-video/vobcopy on Gentoo):

$ vobcopy -m -t "Awesome_Movie" -v -i /mnt/dvd -o ~/movies/


Play with Mplayer (media-video/mplayer on Gentoo):

$ mplayer -nosub -fs -dvd-device ~/movies/Awesome_Movie dvd://1


where -nosub and -fs are optional.

Matroska

Remux the disc (without reencoding) with mkvmerge (from MKVToolNix, media-video/mkvtoolnix on Gentoo):

$ mkvmerge -o ~/movies/Awesome_Movie.mkv /mnt/dvd/VIDEO_TS/VTS_01_1.VOB
(Processing the following files as well: "VTS_01_2.VOB", "VTS_01_3.VOB", "VTS_01_4.VOB", "VTS_01_5.VOB")


Then you can do all the usual tricks. Here's an example of extracting a slice of the Matroska file as silent video in an AVI container with mencoder (from Mplayer, media-video/mplayer on Gentoo):

$ mencoder -ss 00:29:20.3 -endpos 00:00:21.6 Awesome_Movie.mkv -nosound -of avi -ovc copy -o silent-clip.avi


Here's an example of extracting a slice of the Matroska file as audio in an AC3 container:

$ mencoder -ss 51.1 -endpos 160.9 Awesome_Movie.mkv -of rawaudio -ovc copy -oac copy -o audio-clip.ac3


You can also take a look through the Gentoo wiki and this Ubuntu thread for more ideas.

Screen

Screen is an ncurses-based terminal multiplexer. There are tons of useful things you can do with it, and innumerable blog posts describing them. I have two common use cases:

• On my local host when I don't start X Windows, I log in to a virtual terminal and run screen. Then I can easily open several windows (e.g. for Emacs, Mutt, irssi, …) without having to log in on another virtual terminal.
• On remote hosts when I'm doing anything serious, I start screen immediately after SSHing into the remote host. Then if my connection is dropped (or I need to disconnect while I take the train in to work), my remote work is waiting for me to pick up where I left off.

Treehouse X

Those are useful things, but they are well covered by others. A few days ago I thought of a cute trick for increasing security on my local host, which led me to finally write up a screen post. I call it “treehouse X”. Here's the problem:

You don't like waiting for X to start up when a virtual terminal is sufficient for your task at hand, so you've set your box up without a graphical login manager. However, sometimes you do need a graphical interface (e.g. to use fancy characters via Xmodmap or the Compose key), so you fire up X with startx, and get on with your life. But wait! You have to leave the terminal to do something else (e.g. teach a class, eat dinner, sleep?). Being a security-conscious bloke, you lock your screen with xlockmore (using your Fluxbox hotkeys). You leave to complete your task. While you're gone Mallory sneaks into your lab. You've locked your X server, so you think you're safe, but Mallory jumps to the virtual terminal from which you started X (using Ctrl-Alt-F1, or similar), and kills your startx process with Ctrl-c. Now Mallory can do evil things in your name, like adding export EDITOR=vim to your .bashrc.

So how do you protect yourself against this attack? Enter screen and treehouse X. If you run startx from within a screen session, you can jump back to the virtual terminal yourself, detach from the session, and log out of the virtual terminal. This is equivalent to climbing into your treehouse (X) and pulling up your rope ladder (startx) behind you, so that you are no longer vulnerable from the ground (the virtual terminal). For kicks, you can reattach to the screen session from an xterm, which leads to a fun chicken-and-egg picture:

Of course the whole situation makes sense when you realize that it's really:

\$ pstree 14542
screen───bash───startx───xinit─┬─X
└─fluxbox───xterm───bash───screen


where the first screen is the server and the second screen is the client.
