

Nokia N810 and 770 Internet Tablets
Media Format Issues

James F. Carter, 2006-03-24, updated 2008-03-15

Audio: MP3 vs. Ogg Vorbis

Here's a discussion (dated 2001-12-16) of how to rip and play MP3 on Linux. I've used two GUI packages for ripping: kaudiocreator, from SuSE 10.0's kdemultimedia3-CD package, and grip (package grip), which is GTK-based and more Gnome-friendly. Both performed well for me. Each transcribes tracks off the CD using cdparanoia (or its library), which is not the fastest ripper but can work around many media errors such as scratches, then compresses the resulting WAV file with the encoder of your choice. A WAV file runs 1.77e5 bytes/sec; it is deleted after encoding.
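The rip-then-encode pipeline can be sketched from the command line; the track number and filename below are placeholders, and cdparanoia is only invoked if it happens to be installed:

```shell
# Rip track 3 with cdparanoia (the same engine grip and kaudiocreator use),
# guarded because the tool and a CD drive may not be present.
if command -v cdparanoia >/dev/null; then
    cdparanoia 3 track03.wav
fi
# CD audio is 44100 samples/sec x 2 bytes/sample x 2 channels:
echo $((44100 * 2 * 2))    # bytes/sec of WAV, i.e. about 1.77e5
```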

Perhaps I should qualify the "performed well" rating. My desktop machine has an ATAPI interface, and both grip and KAudioCreator did the job there with very little drama: not even any complaints in syslog.

On the other hand, my first ripping attempt was on my laptop, which has an Intel 82801FBM (ICH6M) SATA-capable disc controller and, therefore, SCSI access to the CD drive. I tried grip-3.2.0, but it got into a fight with the generic SCSI driver, which complained that the count or reply length was not set; it was unable to read the disc. I then tried kaudiocreator from kdemultimedia3-CD-3.4.2. It provokes the same complaint but does rip the audio. Some tracks, however, required major thrashing, probably on flaky sectors, but the program eventually read the data. The laptop's CD drive (rather than the media) is likely to blame for the thrashing.

I had a lot of trouble deciding which compressed format to use. The contenders are MP3 and Ogg Vorbis.

Ogg Vorbis

The Ogg Vorbis encoder is free software: the format is unpatented and the reference implementation carries a BSD-style license. It is available in my distro in the vorbis-tools package (/usr/bin/oggenc), or sources can be downloaded from Xiph.org. Quality interpretations from the FAQ: 0 is about 64 kb/s, 3 is about 110 kb/s (and sounds better than MP3 at 128 kb/s, so they say), 5 is about 160 kb/s. At quality 3, which is the default, I got 12.5x compression, or 1.42e4 byte/sec, matching the FAQ's bitrate.
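A minimal encoding run at the default quality looks like this (the track filename is a placeholder, and oggenc is only run if installed), together with the byte-rate arithmetic behind the 12.5x figure:

```shell
# Encode at quality 3, the default; oggenc's -q scale runs roughly -1 to 10.
if command -v oggenc >/dev/null && [ -f track03.wav ]; then
    oggenc -q 3 -o track03.ogg track03.wav
fi
# 12.5x compression of the 176400 byte/sec WAV stream:
awk 'BEGIN{printf "%.0f bytes/sec\n", 176400 / 12.5}'    # about 1.4e4
```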

An Ogg Vorbis codec has been written for the TI TMS320C55x DSP, which is in the Nokia 770. See the thesis of Erik Montnémery and Johannes Sandvall at the Technical University of Lund (2004?). I had a hard time coming up with a working e-mail address for either of them, but I posted a message at Sandvall's student page and I hope he'll eventually see it. It's very unlikely that I'll be able to use this codec on my trip.

Update: Around 2007-01-19, Jussi Kukkonen, Marko Nynaken, and Tilman Vogel released an Ogg Vorbis DSP codec. It ships as the mogg package and its dependencies, one of which is the codec itself. I don't know whether it's a reimplementation of the Montnémery and Sandvall codec, but it enables the media player to play Ogg Vorbis natively: local files, local M3U playlists, and playlists off a website. Thank you all!


MP3

The MP3 format is patent-encumbered, so the encoder is not available as part of Linux distros. Decoders are generally included in distros, but the license terms are not free. According to the license holder's published fee schedule, for PC-type software the decoder costs US$0.75 per unit (user installation) or US$50,000 for an unlimited number of units, and the encoder costs US$2.50 per unit, with a minimum payment of US$15,000 per year. They can provide Windows and MacOS object code at additional cost.

There is an open source implementation called BladeEnc by Tord Jansson, which is what I tried out; another good possibility is LAME (follow the using LAME link for downloading source). The legal status of MP3 encoders is discussed briefly at <mmilut>'s website in Slovenia. At the same site there is a RPM package for BladeEnc.

Running on a machine with an ATAPI interface for the CD, grip had no problems and was easy to use. Remember to do Configure - Rip and select BladeEnc as the encoder; the default is Ogg Vorbis for legal reasons. Grip (cdparanoia) ripped the disc at between 2x and 5x realtime, and BladeEnc on a 2.4 GHz Pentium-4 could encode at about 2x. At a 128 kb/sec quality setting it delivers 11x compression, or 1.60e4 byte/sec. 128 kb/sec is the standard setting in grip, but the author recommends a higher quality setting of 160 kb/sec (less compression).
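The encoding step above can also be done with LAME instead of BladeEnc (LAME's -b flag sets the bitrate; the filenames are placeholders, and the tool is only run if installed), along with the arithmetic behind the 11x compression figure:

```shell
# Encode at a constant 128 kbit/sec, matching grip's standard setting.
if command -v lame >/dev/null && [ -f track03.wav ]; then
    lame -b 128 track03.wav track03.mp3
fi
# 128 kbit/sec = 16000 bytes/sec, so the ratio versus the raw WAV is:
awk 'BEGIN{printf "%.3f\n", 176400 / 16000}'    # about 11x
```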

Battery life test, playing music continuously, using the 1300 mAh battery.

MP3 (Osso Audio Player)
  15:10  Started with full (?) battery
  20:10  Down to one bar on the meter
  20:27  Battery empty, machine shut off
  Total time: 5.3 hrs

Ogg Vorbis (oggplay)
  09:20  Started with full battery, charged overnight
  12:50  Three bars on the meter
  13:20  Two bars on the meter
  13:55  One bar on the meter
  14:30  Zero bars, test stopped voluntarily
  Total time: 5.2 hrs

Let's summarize the criteria for choosing between these formats:

Item             MP3                               Ogg Vorbis
Legal            License required, US$15,000/year  Free (unpatented, BSD-style
                 minimum                           license)
Compression      11x at 128 kbit/sec               12.5x at default quality 3
Equiv byte/sec   1.60e4                            1.42e4
Fits in 800 MB   13.9 hours, 11 discs              15.6 hours, 12 discs
Rip & compress   around 2x realtime                3 to 5x realtime
Players          Osso Audio Player, in the DSP     Oggplay, in the CPU; or the
                                                   mogg add-on to the media
                                                   player, in the DSP
Annoyances       Lawyers with subpoenas            Oggplay-0.30 can do multiple
                                                   selection but not m3u playlist
                                                   files; dropouts when other
                                                   tasks hog the CPU
Battery life     5.3 hours                         5.2 hours (essentially the same)
Key points       MP3 runs in the DSP               Ogg Vorbis is legal, with higher
                                                   quality and smaller files
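The "fits in 800 MB" figures follow directly from the equivalent byte rates; as a quick check:

```shell
# Hours of audio on an 800 MB card at each format's effective byte rate.
awk 'BEGIN{
    printf "MP3: %.1f hours  Ogg Vorbis: %.1f hours\n",
        8e8 / 1.60e4 / 3600, 8e8 / 1.42e4 / 3600
}'
```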

And the winner is: Ogg Vorbis, because it's better and is more legal. But I really wish I could get my hands on that DSP codec.

Music selection for the trip:

It took most of a day to rip these discs. I did three of them in both Ogg Vorbis and MP3, to test.

Journal Articles: PDF vs. HTML

My main interest here is Science Magazine, which requires a subscription; an online account is included in the subscription. It turns out that each article is packaged in a separate PDF (smallest 53 kB, largest 5 MB, second largest 1.2 MB), so it's not exactly a one-click download process, but it is definitely feasible. Total size of this issue: 18 MB. Later I wrote a script using wget that automates the process, carefully staying within the bandwidth limit requested by their webmaster.
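My script is not reproduced here, but the polite wget invocation at its core might look like this; the URL, rate cap, and delay below are assumptions, not the values their webmaster actually requested:

```shell
# Hypothetical list of per-article PDF URLs, normally scraped from the
# issue's table of contents.
printf '%s\n' 'https://example.org/issue/article1.pdf' > urls.txt
# --limit-rate caps bandwidth; --wait pauses between successive files.
cmd="wget --limit-rate=20k --wait=2 --input-file=urls.txt"
echo "$cmd"    # shown rather than run, since the URL is fictitious
```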

Reading the PDFs was not exactly an optimal experience. I tried two viewers, the bundled Osso PDF viewer and Evince; here's a summary:

I strongly suspect that the PostScript engine in both viewers uses floating point heavily, and the OMAP-1710 processor lacks hardware floating point (though some ARM9s have it). That would explain why the viewers take so long to render the PDFs.

Either PDF viewer is a memory hog. Several times, with a PDF active but iconified, I ran out of memory doing other things. So don't do that. A swap file on the memory card would be horribly slow but would let you survive the out-of-memory condition.
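Setting up such a swap file is straightforward; the 64 MB size is an assumption, and on the tablet the file would live on the memory card rather than the local path used here for illustration:

```shell
# Create and format a 64 MB swap file (on the tablet, put it on the
# memory card; a local path is used here so the commands are runnable).
dd if=/dev/zero of=./swapfile bs=1M count=64 2>/dev/null
chmod 600 ./swapfile
command -v mkswap >/dev/null && mkswap ./swapfile
# Enabling it requires root:
#   swapon ./swapfile
```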

Here's a comparison of the speed of Osso PDF and Evince (all times in seconds):

Activity                  Osso PDF   Evince
Load 1-page PDF (1517)    89         101
Zoom 1 step               100        1
Load 5-page PDF (1533)    110        142
Zoom 1 step               130        1
Second page               97         237

It's clear that a lot of the blame for slow rendering also has to go to the Science web designer and to whoever designed the PDF format. The largest PDF in the set turned out to have two illustrations, schematics of enzyme cycles in a cell, drawn as large areas of uniform color with symbols for the enzymes on top. In a run-length-encoded format such as GIF, or in JPEG, both together could scarcely occupy 50 kB, but they bloated the PDF up to 5 MB, each byte of which (after decompression) had to be massaged by the PostScript engine. The images obviously were provided by the author at a high pixel density, and they should have been decimated to something more reasonable. Can you decode JPEG in PostScript? Stuff that up your LaserWriter!
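Decimating such an image before embedding is easy with ImageMagick; the filenames and the 25% scale factor are assumptions, and the tool is only run if installed. The arithmetic shows why it pays off:

```shell
# Downscale an oversized schematic to a quarter of its linear resolution.
if command -v convert >/dev/null && [ -f enzyme_full.png ]; then
    convert enzyme_full.png -resize 25% enzyme_small.png
fi
# A 4000x5000 RGB image is 60 MB decompressed; 25% linear scale cuts the
# pixel count (and the decompressed size) 16-fold:
echo $((4000 * 5000 * 3 / (4 * 4)))    # decompressed bytes after decimation
```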

The HTML version is working out well -- I've read at least 15 issues and it's a lot more convenient than reading paper. I'm discovering in a lot of areas that a web browser can do a whole lot of page display stuff very well, and if you can use HTML you should use HTML, in preference to various other display methods.

E-Books: Plucker vs. HTML

I installed the Plucker viewer, originally written for the Palm, which is said to be very good. In the software catalog there is a link to download the Maemo version. This is for Maemo-1.1; Plucker is not (yet) available for Maemo-2.0, but FBReader is. FBReader can read various file formats beyond its native one, including Plucker and HTML. My experience with FBReader was generally similar to my experience with Plucker.

I downloaded several e-books from Project Gutenberg, which has a large selection. The more popular (or more completely processed) books come in several file formats: flat text, HTML, and Plucker (extension .pdb, MIME type application/x-plucker). Their filenames are just numbers, though, so when you download you need to give them more reasonable names.
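Renaming the numeric downloads is a one-liner per book; the number and title below are hypothetical:

```shell
# Simulate a freshly downloaded Gutenberg file named only by its number...
touch 12345.pdb
# ...and give it a readable name.
mv 12345.pdb Some_Readable_Title.pdb
ls Some_Readable_Title.pdb
```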

Unfortunately, my initial experience with Plucker was frustrating; I couldn't get past the first chapter of any of the books.

To debug, I tried installing the Plucker package from SuSE Linux on my laptop. It would appear that this package contains the toolset which creates Plucker files, but I don't see the viewer. Plucker was originally created to compress HTML off the web, using the toolset on a PC, and then view it on a Palm. It's equally easy to create an e-book from local HTML files (except HTML tables are handled poorly), and the Plucker viewer on the ITB works perfectly on the output. Zlib compression of about 2:1 is achieved versus the original HTML. Here's a sample command line to create an e-book off the web; substitute file://localhost/ URLs for local files.

    plucker-build --author="James F. Carter" \
        --doc-name="Tiger in the River" --title="Tiger in the River" \
        --maxdepth=2 --url-pattern='*' > TigerRiver.pdb

This implies that the finger of blame points at the books I downloaded from Project Gutenberg, which must be incomplete or damaged. So instead I downloaded zip files containing HTML and tried rebuilding the books from those, with mixed results. One book took the form of a single long HTML file, but after it was pluckerized I could only read the first chapter, same as with the pdb file I downloaded. The other book has many illustrations -- scans of the original pages in which the text runs around woodcuts -- and the total size of the HTML plus images is 17 MB. The pdb file ended up at 9 MB. I copied everything to the ITB, but the viewer choked when opening this file, running itself out of memory. The pdb file originally downloaded for this book contained only the table of contents; the intent clearly was to link to eight parts, each in its own HTML file, but I couldn't find the corresponding pre-made Plucker files.

I find that these e-books work out much better if I just view the HTML with the ITB's web browser. On the other hand, my own stories were originally created as HTML (well, not really, but that story is off topic), and I find that they come out even better when compressed and viewed with Plucker. I think the key to making a good Plucker file is to have reasonably sized chapters -- mine are 44 kB to 88 kB -- rather than one or a few giant web pages. I haven't experimented, but the images will be reformatted to PNB, and I suspect that the size of the images in this format needs to be counted against the upper bound on chapter size.
