Stitching layers between copper zones in KiCad

I’ve been working a bit with KiCad lately and have run into a problem in PCBnew with “stitching” (i.e., adding vias between) filled zones on top and bottom layers. This is something you typically do if you have flooded the unused spaces on both top and bottom of your board with copper and have connected the floods to ground or some other reference.

The KiCad FAQ outlines a process for doing this, and it works fine until you refill (i.e., re-pour) the zones — or the DRC refills them for you. When the zones are refilled, the vias you added for stitching become isolated from the zones and end up as little pads floating in space.

The problem and a workaround were discussed in a recent thread on the kicad-users mailing list. I want to summarize the workaround here in a slightly less terse way:

  1. Route the board and define your zones as you always have.
  2. Fill the zones as you always have.
  3. Select “Add tracks and vias” from the toolbar on the right.
  4. Click on an existing pad that’s connected to the zone’s net, drag the pointer a little bit to create a short track, then either (a) right-click and select “Place Via” or (b) type the ‘V’ shortcut.
  5. To add more stitching vias, continue to drag the pointer and type ‘V’ where you want to drop vias (or right-click and select “Place Via”).
  6. When you are done placing vias, hit the ‘End’ key on your keyboard (or right-click and select “End Track”).

You can repeat this as many times as you want to create different clusters of stitches. When you refill zones, the vias will retain the connectivity information and work as expected.

Patents are public

I do a fair amount of open source code development, yet I am not (yet?) against the idea of software patents. I think there are significant problems with the way software patents are implemented in the US, but I don’t find the concept fundamentally odious.

However, I do find corporate bullying using patent infringement as a pretext quite reprehensible. A recent example–to which I will not link so that I don’t make things worse for the coder–really has me annoyed.

In essence, an independent coder reimplemented from scratch, and for no commercial gain, a program that duplicates the functionality of a commercial product. The coder then published her/his work with the intent of releasing the code under an open source license. I am not a lawyer, but I think the general gist of a patent is that it is illegal to distribute a product that is under patent protection without first obtaining a license from the patent holder. As far as I know, it is not illegal in the USA to discuss patents in public nor to publish, for example, plans for making a better or different patent-protected Hovercraft Eel Sensor. It only becomes an issue if you try to distribute a product based on those ideas without a license. The ideas and discussion thereof are public. The use of ideas in products is protected.

The coder in our story has simply published plans (i.e., source code) for making a different version of a (possibly) patent-protected product. Nowhere does the coder indicate her/his intent to distribute a product (i.e., executable code) based on the plans. In spite of this, our coder gets a threatening email from the V.P. of the corporation that makes the commercial product claiming (without specificity) that the published work infringes on their patents.

It may also be worth noting that the coder does not appear to have done any reverse engineering to discover how the software works. So if the V.P. actually means “trade secret” when he says “patent,” then there’s nothing there either.

I am not a lawyer, but I utterly fail to see the grounds for the corporate entity’s gripe in this and similar cases. Also, in the particular case in question, you cannot help but notice the extreme vagueness in the communications from the corporate entity. Under these circumstances, one could easily be forgiven for thinking that the corporate entity knows it has nothing to stand on and that its strategy is simply to threaten a costly legal process against the coder. They know that lone coders will almost certainly comply with threatening requests to spare themselves the burdens of the situation. I am not a lawyer, but in common-person speak we sometimes use the word “extortion” for this strategy.

You can’t have it both ways: patent protection and protection from public scrutiny. Patents are public.

Poor man’s code generator

A short time ago, I wanted to find a simple code generator that would run on Windows to help me maintain hand-rolled SPICE model files. I was a bit surprised to find none that were no-brainers to install or use. However, I did figure out that OpenOffice’s mail merge feature can be used as a poor man’s code generator. There’s more than one way to do it, but the process I found described online is the most straightforward for my brain. The key is to “print out” to several files or a single file–depending on what you are generating.
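For what it’s worth, the same merge-fields-into-boilerplate idea can also be sketched in a few lines of Python. This is only an illustration of the concept; the part names and parameter values below are made-up placeholders, not real device data.

```python
"""Tiny merge-style code generator: stamp a SPICE model template
once per row of part data, like a mail merge."""
from string import Template

MODEL_TEMPLATE = Template(
    "* $part ($vendor)\n"
    ".model $part D(IS=$is_ RS=$rs N=$n)\n"
)

# Placeholder data rows -- in a real setup these might come from a
# spreadsheet or CSV export instead.
PARTS = [
    {"part": "DEXAMPLE1", "vendor": "example", "is_": "1e-14", "rs": "0.5", "n": "1.9"},
    {"part": "DEXAMPLE2", "vendor": "example", "is_": "2e-15", "rs": "0.8", "n": "2.0"},
]

def generate(parts):
    """Return the merged model text for all parts."""
    return "\n".join(MODEL_TEMPLATE.substitute(p) for p in parts)

if __name__ == "__main__":
    print(generate(PARTS))
```

Redirect the output to a file and you have your “print out to a single file” case; looping over rows and writing one file per part gives you the other.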

SPICE library management

The following post is all geekspeak. You have been warned.

I am trying to keep my SPICE modeling as platform- and vendor-neutral as possible. To help with this, I have come up with the following structure for managing libraries. The idea is to have a file system, e.g.:

- models
    - diodes-inc
        - diodes
        - transistors
        - zeners
    - fairchild
        - transistors
    - onsemi
        - transistors

and in each category (i.e., subdirectory) store your individual model files. The file system, in addition to providing a structure to manage different kinds of parts from different places, also helps to differentiate models for the same part from different sources.

Now here’s the fun part. So that I don’t have to have a million different .inc or .lib commands in the SPICE simulation’s source (one for each part I use, e.g.,

.lib C:\SpiceDev\models_raw\fairchild\transistors\MMBT2907A.lib

), I aggregate all the models in a given subdirectory into a single library file. Thus, C:\SpiceDev\models_raw\fairchild\transistors, in addition to several individual model files, also contains the file C:\SpiceDev\models_raw\fairchild\transistors\transistors.lib, which is an aggregation of all the other files ending in .lib, .mod, or .sp3. So now if I want to use a Fairchild transistor, I only need to include a single file:

.lib C:\SpiceDev\models_raw\fairchild\transistors\transistors.lib

Of course, I don’t maintain the aggregate library file by hand. Instead, I have written an AutoHotkey script that does the job. I place the script in a fixed place and then create links (i.e., shortcuts) to it from the directories containing the model files; but it will also work if you drop the script itself into the directory in which you want to make an aggregate library.

The script goes through each file in a directory (non-recursively) and, if the file has a .lib, .mod, or .sp3 extension, appends its contents to a file named {directory-name}.lib. Both the extension of the output file and the list of aggregated input file extensions can be easily changed in the source code.

One important note: If you want to call the script using a shortcut, make sure the SetWorkingDir command in the code is commented out (as it is below) and also make sure the ‘Start In’ field for the shortcut is blank or points to the desired directory. Enjoy.

; AutoHotkey Version: 1.x
; Language:       English
; Platform:       WinXP
; Author:         Copyright (C) 2009 Mithat Konar
; License:        GNU/GPL2
; Script Function:
;	Copies contents of all files with extensions specified below into {directoryname}{outFileExtension}

#NoEnv  ; Recommended for performance and compatibility with future AutoHotkey releases.
SendMode Input  ; Recommended for new scripts due to its superior speed and reliability.
;SetWorkingDir %A_ScriptDir%  ; Ensures a consistent starting directory.

; To use with a shortcut, make sure SetWorkingDir command above is commented out and
; make sure the 'Start In' field for the shortcut is blank or points to the desired dir.

; user-set constants
outFileExtension=.lib	; the extension for the output (with dot!)
inExtList=lib,mod,sp3	; list of valid model file extensions (without dots!)

; "Main"
SplitPath, A_WorkingDir, outFile	; get the name of the active directory.

; use %outFile% as working file, then move to %outFile%%outFileExtension%
FileDelete, %outFile%					; just in case, we delete any old version now
FileDelete, %outFile%%outFileExtension%	; just in case, we delete any old version now

FileAppend, *======================================================================*`n, %outFile%
FileAppend, * Generated by %A_ScriptName% on %A_NowUTC% UTC from`n, %outFile%
FileAppend, * %A_WorkingDir%`n, %outFile%
FileAppend, *======================================================================*`n`n, %outFile%
Loop, *.*
{
	if A_LoopFileExt in %inExtList%
	{
		FileAppend, *** File: %A_LoopFileFullPath% ***`n, %outFile%
		Loop, Read, %A_LoopFileFullPath%, %outFile%
			FileAppend, %A_LoopReadLine%`n
		FileAppend, `n`n, %outFile%
	}
}
FileMove, %outFile%, %outFile%%outFileExtension%, 1
FileDelete, %outFile%
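If you’d rather not depend on AutoHotkey (say, on Linux), here is a rough Python sketch of the same aggregation logic. It follows the same assumptions as the script above (input extensions, header comment, output name); treat it as an illustration rather than a tested drop-in replacement.

```python
"""Rough Python sketch of the AutoHotkey aggregation script: concatenate
all model files in a directory into a single {directory-name}.lib file."""
from datetime import datetime, timezone
from pathlib import Path

IN_EXTENSIONS = {".lib", ".mod", ".sp3"}  # model file extensions to aggregate
OUT_EXTENSION = ".lib"                    # extension of the aggregate file

def aggregate(directory="."):
    directory = Path(directory).resolve()
    out_path = directory / (directory.name + OUT_EXTENSION)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    chunks = [
        "*" + "=" * 70 + "*",
        f"* Generated on {stamp} UTC from",
        f"* {directory}",
        "*" + "=" * 70 + "*",
        "",
    ]
    for path in sorted(directory.iterdir()):
        # skip subdirectories, non-model files, and the aggregate itself
        if not path.is_file() or path == out_path:
            continue
        if path.suffix.lower() not in IN_EXTENSIONS:
            continue
        chunks.append(f"*** File: {path.name} ***")
        chunks.append(path.read_text())
        chunks.append("")
    out_path.write_text("\n".join(chunks) + "\n")
    return out_path

if __name__ == "__main__":
    aggregate()
```

Run it from (or point it at) a model subdirectory and it rewrites the aggregate library from scratch each time, so stale copies are never appended to themselves.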

LTspice and tube models

Linear Technology’s LTspice is becoming quite popular among audio circuit designers, both professional and amateur. There is a lot to recommend it, but there is at least one issue that is crucial to be aware of if you are planning to use vacuum tube or other third-party models based on arbitrary behavioral voltage or current sources. And that is: LTspice’s implementation of arbitrary behavioral voltage or current sources is not completely SPICE 3 compatible.

In particular (from the LTspice help files),

LTspice uses the caret character, ^, for Boolean XOR and “**” for exponentiation. … This means that when you import a 3rd party model that was targeted at a 3rd party simulator, you may need to translate the syntax such as x^y to x**y or even pwr(x,y).

I ran into exactly this issue when experimenting with SPICE 3 versions of my own tube models.
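For simple models, that translation can be automated. Below is a hedged Python sketch: the regex only recognizes bare operands such as V(g)^1.5 or x^y, so expressions with parenthesized subexpressions still need hand editing (or a real expression parser).

```python
import re

# Matches a simple operand -- an identifier, a number like 1.5, or a
# call like V(g) -- on each side of '^', and rewrites a^b as (a**b).
_CARET = re.compile(
    r"(\w+(?:\.\w+)?(?:\([^()]*\))?)"  # left operand
    r"\s*\^\s*"                        # SPICE 3 exponentiation operator
    r"(\w+(?:\.\w+)?)"                 # right operand
)

def caret_to_pow(expr):
    """Rewrite simple x^y exponentiations into LTspice's x**y form."""
    return _CARET.sub(r"(\1**\2)", expr)
```

For example, caret_to_pow("V(g)^1.5") gives "(V(g)**1.5)"; emitting pwr(V(g),1.5) instead would be an equally valid target per the LTspice help.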

Home Cloud

If the recent rash of netbooks is any indication, cloud computing may actually be gaining traction.

The aspect of cloud computing that’s the most attractive for me is being able to access all your stuff no matter where you are–provided you have a computer with a decent Internet connection and a fairly standard browser. However, there are two very bothersome aspects of cloud computing. First, if you cannot connect to the provider of your cloud service (e.g., your ISP is flaky, the service’s servers are ill, the site has been banned in the country you are in, etc.), you are screwed. Second, no matter what guarantees the provider gives you, your stuff is in someone else’s hands–meaning the provider can legally sniff your stuff for more effective marketing (Google) or it may be illegally hacked into.

However, there is a fairly easy approach to ameliorating both these problems, especially now that capable server hardware has become so profoundly cheap. The idea is simple: instead of having Google, Google Apps, Zoho or whomever host your Cloud apps, host them in your own home on a dedicated computer. As long as you don’t plan to open your Home Cloud to tons of users, the performance demands on the hardware will be pretty small.

When you host your Cloud apps from home, if your ISP goes nuts you will still be able to access your stuff from within your home LAN. While this won’t help you if you need to access your stuff from Starbucks, it is better than not being able to access it from anywhere. Also, when you host your Cloud apps from home, your data stays at home. It still may be open to hacking, but it won’t be available for other purposes. In addition, a would-be hacker would have to specifically target your server, whereas in a hosted situation one breach of the server may make all users’ data available to the hacker.

One downside to the Home Cloud concept is that it places the burden of backing up data on the home user. But this can be greatly simplified by appropriate Home Cloud software.

A bigger problem with the Home Cloud is what all the cool people are now calling “monetization”. In other words, how do you make money off it? End users are becoming increasingly accustomed to getting services for free. Google makes money feeding you ads. Zoho makes money by selling premium services mostly to businesses. Are users willing to pay for Home Cloud software? One possible way forward is to adopt the media server model: dedicated server hardware that’s preloaded with everything needed to make it go and that requires a minimum of user configuration. We may be living in a time where it may actually be easier to sell hardware that encapsulates a task than software.

I’m aware of only a few projects that have a Home Cloud spirit. eyeOS and Lucid Desktop are OSS home-hostable apps that give the user a virtual Web-based desktop. Another project to keep an eye on is Tiny Tiny RSS–essentially a home-hostable, open-source replacement for Google Reader. It will be interesting to see where all three of these projects go.

A rare moment of accountability

On my way to the office today, a white van barreling down a road that joined mine at a T junction nearly removed my car, and possibly me, from service. Fortunately, disaster was averted by some heavy braking and staccato tire squealing on my part (no ABS on the 1997 Fiat-TOFAS Tipo).

This in itself is not news. Near-misses in traffic are a million a day here. What made this event special was that the driver pulled over at a suitable spot some 300 meters on, rolled down his window, and poking both arms and his head out of the window gestured toward me for forgiveness.

And it all happened so quickly that I didn’t have a chance to give him a warm huggie to let him know that while I was annoyed at his carelessness I still appreciated his accountability.

P.S. For those of you who thought this entry might have something to do with audio, I have no reason to think that the white van had anything to do with speaker sales.

Le Mepris

That’s French for “Contempt”—as in the 1963 film by Jean-Luc Godard.

I am teaching a film course this term and decided to try to work “Le Mepris” into it. I just re-watched the film before my lecture tomorrow. I meant to watch just the first few minutes but ended up watching the whole thing.

The film is so awesomely awesome on so many levels that it always messes me up. It revved my brain and my gut into hyper mode. I really should learn to avoid it. And Godard really ought to have made more like it.

What does this have to do with audio? Almost nothing—except maybe that the music for the film is, like much of the rest of the film, awesomely awesome and does a somewhat subtle though effective job of messing with the semiology of conventions.

Ripping thoughts

I got myself a nice new CD today (American Saxophone Music, Alex Mitchell), and this provided me a bit of an excuse to revisit audio ripping on Windows.

Picking codecs

With the widespread availability of high-quality lossy and lossless codecs, it makes no sense to rip to WAV files. I use two codecs as my defaults: FLAC for lossless compression and Windows Media for lossy compression. FLAC works about as well as any other lossless codec (apart from things like the generally unsupported Meridian Lossless Packing), and I love the fact that it’s an open source project. In fact, I am a big fan of open source in general. I routinely use open source software when it does what I need it to do–even if it may not be as smooth or efficient as a commercial or other closed alternative. I have even contributed a few open source programs, and not just for use in education. So FLAC floats my boat.

Then why, you are asking, do you use Windows Media as your default lossy codec? Especially when there are things like Ogg Vorbis around? Simple: at lower encoding rates, I find the damage done by WMA less distracting than the damage done by Ogg or MP3 at similar rates. This is a very subjective issue and one that is not the case (for me) for all music, all the time. But in general, WMA works for me. The fact that it’s supported on just about every DAP, media player, etc. also helps. (In all honesty, it’s been a few years since I have done any careful codec comparisons. It would probably be a good idea to re-do some tests to see if Ogg or MP3 encoding have improved.) I don’t really contemplate using aacPlus or other codecs. While aacPlus can do some amazing things at really low bitrates, it just doesn’t have enough support in hardware and software players to make it a viable alternative for general ripping. (However, I do use aacPlus as my default codec for remote streaming of my music collection, but that’s another story.)

Of course, this assumes you are using a newer version of the WMA encoder. To do WMA encoding, I use the gratis command-line based Windows Media Encoder 9 Series tools. Many CD rippers incorporate older WMA tools, and these older tools limit the maximum bitrate, don’t give you options for constant/average bitrate, and probably don’t use the latest encoding models. (Microsoft claims, “[WMA 9’s] sound quality is 20 percent better than audio sampled with Windows Media Audio 8 at equivalent data rate.”) I should also mention that I won’t touch Windows Media Player with a ten-foot pole. Not only do I find its interface maddening, it also takes control of your media collection by default, changing tags and cover art at will. Sure, you can turn this off, but the whole package just rubs me the wrong way.

Tagging tracks

Apart from encoding the audio data into your desired format, a CD ripper should also let you automatically add tags to ripped tracks. While there are a ton of standalone programs that let you do this, it’s always much better to take care of this from the start. There are two traditional sources of track information used by ripping software: Gracenote and freedb. You can read about the history of the relationship between the two elsewhere, but in summary, the technology and database that make up Gracenote got started as an open-source project. Over time, the technology was converted to a proprietary, for-profit deal, and that got some people riled up. Thus freedb was born. The quality of data available from Gracenote tends to be better and more comprehensive, but I still prefer freedb because it’s free and for the people, and it provides some competition for Gracenote.

Ripper options

So, what this means for me is that my ideal CD ripper will

  • Rip FLAC out-of-the-box
  • Rip fully optioned Windows Media 9 (or better) out-of-the-box
  • Allow the use of external command-line tools for encoding if either of the above are lacking
  • Use freedb for tagging data
  • Be open source

Sadly, such a ripper does not exist. So here is what I am using anyway.

Winamp (v5.52)

Winamp is a great-grand-daddy of media players. As it is asked to do more and more on top of an ageing codebase, it’s getting pretty bloated. However, you can remove the bloating additions if you really want. It is my default audio player because it does ASIO via a third-party plugin and it has decent media library abilities. And it will play just about anything you can throw at it. The things I don’t like about Winamp are the previously stated bloat, a UI that is a bit maddening and not really attractive (but usable nonetheless), and the fact that it’s not open source.

Winamp also does CD ripping. It encodes to FLAC and Windows Media 9.2 out-of-the-box; however, the freeware version of the program limits ripping speeds to 8x. It also uses Gracenote for track data. While there is no facility for linking to external encoders (except perhaps through a third-party plugin), Winamp is professionally maintained–which means frequent updates of the encoders.

Summary: A reliable, configurable workhorse that is undermined by slow encoding. Works well, but not currently my first choice.

CDex (v1.70 beta 2, zip archive version)

I get the feeling that this open source project is floundering a bit. It’s popular alright, but its development seems intermittent. It’s got all the basic features that you want in a ripper, and the interface is ok. The latest beta version does FLAC out-of-the-box, and it also does WMA–but the bitrate is limited to 160 kbps and you’re given no options other than bitrate, which makes me think they are using an older encoder. It does let you use external encoding tools, but I could not get WMA9 to work. It uses freedb for tagging data.

Summary: Good for ripping FLAC. Ripping with external encoders is available but unreliable. My current choice for FLAC.

BonkEnc (v1.0.6)

This is another open source offering. Like CDex, this project seems a bit stalled, but the (one) developer makes it clear that developing this program isn’t a top priority in his life. It does FLAC out-of-the-box, but it supports neither WMA nor external codecs. However, the developer promises a plugin system in a future release that may address both issues. It uses freedb for track data. The interface uses a toolkit I am not too familiar with, and while it’s aesthetically not unappealing, usability suffers a bit because the buttons, etc. are so tiny. A promising start that I would love to see mature some more.

Summary: Good for ripping FLAC. Ripping with external encoders is unavailable. Worth keeping an eye on.

Audiograbber (v1.83)

Audiograbber may be the most interesting of the bunch. This is an old shareware title that went freeware in 2004. It does not do FLAC out-of-the-box, and while it has WMA built-in, it is only version 8. However, its ability to use external encoders works–at least with Windows Media Encoder 9. (You can download a pdf of the enhanced batch file I am using that does tagging automatically.) It also works with FLAC. It uses freedb for track data. The interface takes a bit of getting used to, but once you adjust to it, it’s quite usable. But if I can get external WMA encoding to work with CDex, I will probably retire this title. I really wish the owner of the code would open it up so that it could be updated because this has proven to be quite a solid workhorse.

Summary: Good for ripping WMA and FLAC via external encoders. Internal encoders are mostly obsolete, but support for external encoders is good. My current choice for WMA.


There are a couple of possibilities that I have not yet thoroughly explored. First is foobar2000, a gratis audio media player that, like Winamp, supports ripping and has ASIO support built in. I’ve sometimes considered using foobar2000 instead of Winamp, but the thing that gives it power (i.e., tons of flexibility and configurability) is also its Achilles heel. Doing anything in foobar2000 is an exercise in the non-intuitive. And Winamp’s media library just works much better. I gotta say, using other media players really makes you appreciate how well Winamp’s media library database manager works. It updates very quickly, without the user’s intervention, and doesn’t get in your way when it’s doing so. All other similar things I have tried really blow in comparison. But I digress…

Finally, while this is probably totally in my head, I think Winamp–even in ASIO mode–sounds more correct than foobar2000. I really didn’t want to mention this until I had a chance to do bit-for-bit output comparisons and other testing, but I mentioned it anyway.

Still foobar2000 has a couple things going for it that make it hard to dismiss it altogether. One of those is that foobar2000’s support for external encoders appears quite good. Another is that the filetype icons are way cooler than Winamp’s.

The second option is Exact Audio Copy, a gratis ripper that claims to do a better job of reading your bits without error than anything else out there. They make a huge deal about this, but I have yet to find that any of the other ripping solutions mentioned here suffer from bit-reading issues of any kind. Maybe if you’ve got a really crappy drive or a marginal disk… Anyhoo, EAC is a bit of a bear to set up, but it seems to be active, so it’s also worth a closer look.

I’ll report back if I have any world-altering experiences with either foobar2000 or EAC.

Getting covered

Of course, once you’ve ripped your CD, you’ll want to put some cover art into the CD’s folder. There are all sorts of software titles to help you with this task, but I just Google around until I find what I’m looking for. In my opinion, anyone who says this isn’t fair use has a really warped view of the concept.

||: salty chocolate :||

A gob of years ago, I was on a school bus on my way to another day of 3rd grade. I was thinking about some song or another that I really liked–“Uncle Albert/Admiral Halsey” by Paul and Linda McCartney possibly–and I wondered what kind of music I would be reacting to in the same way that people of my parents’ generation reacted so negatively against but that I loved so much. I took it for granted that in spite of my best efforts to stay “with it,” the generation gap would slowly slide so that I became the old fogey, complained that all this new music sounds the same, and would you please turn it down.

That day didn’t happen for many, many years.

That was then.

I like a lot of David Bowie’s repertoire, but I don’t think he is/was a musical genius. In spite of this, there is at least one spot in Nicholas Roeg’s “The Man Who Fell to Earth” where someone’s genius was showing. The spot I have in mind starts with Thomas Newton (Bowie’s character) nuzzled up against a spheroid device listening to some really inane music. We are led to think that he’s listening to the very linear, scalar, repetitive, oscillator-driven stuff because it reminds him of home–at least until he removes the sphere from the device (stopping the music) and says something like, “I hate this shit that Farnsworth sends me.” It’s genius because of the way it plays on a number of the viewer’s expectations–both about Newton’s character and about what we are “supposed to” think space music/music-of-the-future sounds like.

It’s also genius because that very linear, scalar, repetitive, oscillator-driven, inane shit is what I am now hearing in a lot of places. Case in point: the music in the supermarket I was in yesterday. God, I wanted to run screaming from the place by the time I was done with my shopping. The same moronic thing, over and over and over and over again for 10 or more minutes. In this case it was a disquieting hybrid of a Latin beat and some Germanic melody. It was like salty chocolate, rammed into your mouth over and over again for ten agonizing minutes. Not want.

DIY: PCB layout tools


Update [2012-01-05]: WinQcad hasn’t been updated since May of 2011, and email enquiries to its author (who has always been very responsive to me) are going unanswered. For these reasons, I can no longer recommend it. I really hope the situation is a temporary one.

Because of the situation with WinQcad, I decided a few months ago to adopt KiCad as my primary PCB design tool. My reasons for choosing KiCad include:

  1. It’s free and open source software and enormously popular–meaning that it’s not likely to go away or suffer from midstream licensing changes.
  2. It works well on Linux (my main OS for several years now) as well as Windows. It’s reported to work on OS X as well.

Its main drawback (for me) is the lack of a really good autorouter. The built-in one isn’t that useful. Many people use the FreeRouting router, but (a) it hasn’t been updated in quite a while–leaving its future status in doubt–and (b) unlike KiCad itself it’s not FOSS (though it is gratis). There is a project underway to build a standalone autorouter that is compatible with KiCad, and I hope this project succeeds.

I am also using gerbv a lot for inspecting Gerber files. gerbv is part of the gEDA initiative.

I am currently working on laying out a printed circuit board for a client that is pretty dense (the board, that is, not the client). I don’t make a living off PCB layout work, so it doesn’t make sense for me to buy a mega-thousand dollar package like those offered by Cadence, Mentor Graphics, Pulsonix and the like. However, the boards I do can be fairly complex and so require pretty decent tools to support the process.

Fortunately, there are a number of low cost and open source tools that can be used to obtain professional results. (Check here for a comprehensive list of cheap and not-so-cheap tools.) A bunch of years ago, I went through a round of evaluating them, and I eventually settled on a little-known package called WinQcad. More recently, I decided to use my current project as an opportunity to re-evaluate some of the latest offerings, and the conclusion that I reached is that still nothing beats WinQcad.

Like most PCB development tools, WinQcad has a pretty steep learning curve, but it makes the entire process of schematic capture through generating production files at least as simple as any of the other solutions. However, the real deal-clincher for WinQcad is that it has hands-down the best autorouter of the bunch. Most of the autorouters I have tried fail to route even fairly simple boards, and when they succeed, they often make some very “interesting” routing decisions. In contrast, I have yet to encounter a design that WinQcad’s autorouter couldn’t handle and do a good job of as well. I can’t begin to tell you how much time and stress WinQcad’s autorouter has saved me.

You can download a free pin-limited version for small projects or evaluation purposes, and a 1000 pin license costs only $300. Highly recommended.

While WinQcad has a large library of parts and land patterns, in many cases you may still need to develop your own. To help you with this, the IPC and PCB Libraries, Inc. offer a free program that contains full specifications for standard IPC-7351A land patterns. This program is part of a larger suite sold by PCB Libraries that has all sorts of other things to help you develop land patterns; but if you are patient and are working with standard packages, this freebie will serve you well. PCB Libraries makes another subset of the suite available for free as well to help you with non-standard land patterns.

Another tool you will need if you plan to have your boards professionally made is a Gerber file viewer. Never, ever send Gerber files off for manufacturing unless you have used a viewer to confirm that they are ok. I recommend ViewMate from Pentalogix and GC-Prevue from GraphiCode. The user interfaces of these programs can be obtuse, but they get the job done.


DIY: loudspeakers

I have more than a little sympathy for DIY loudspeaker builders, because that is how I got my start. I think I was about 10 when I built my first speaker with some scrap plywood and a 5-1/4″ driver that my brother gave me. How time flies…

The hobby itself can be quite seductive, especially if you have some woodworking skill. It seems like it’s a pretty straightforward matter to build a box, buy some drivers, mount them along with a crossover or something, and then have a system that looks great and costs less than the commercial equivalent. However, once you start you will realize that good speaker design is actually a very complex affair. It only seems easy because of the relatively few parts that are involved. But I will let you discover all that for yourself–it’s part of the fun. If you do decide to go down the DIY path, you will do yourself a favor to have modest expectations at the start, and be prepared to get sucked into something that can take over your life if you let it.

Here are a few resources I recommend for the DIYer. Most of these will be known to experienced builders, but if you are new to the area you may not know all of them.

Drivers and other parts

Madisound Speaker Components is one of the best sources of drivers and other parts (capacitors, inductors, etc.) for both the DIYer and for small manufacturers. They have consistently been my first choice for supplying Biro’s own needs. Their service is top-notch, personal, and has never let me down.

Parts Express is another good source of drivers and parts. Professional service and good parts selection. It may be a small thing, but their selection of cabinet ports is quite good.

I have experience with Solen Electronique and MCM as well and cannot complain about either.


If you don’t like the idea of making lots of sawdust, Parts Express’ finished cabinets are really hard to beat. I have used these for some prototype projects, and the workmanship and finishes that I have seen have been really good. Madisound has a similar line of cabinets. While I expect that they are of similar quality, I have not actually seen any of these so I cannot say so with any certainty. (Madisound also sells a line of cabinets made by Woodstyle in California that some people really like, but the link shows nothing at the moment. I don’t know if that means they have been discontinued or if it’s just a bug in the website.)

Design information

If you are looking for information to help you decide what driver to use in your next project, Zaph Audio has a wealth of information to help you. John “Zaph” Krutke’s attention to detail in his measurement and methodology is admirable, as is his enthusiasm for sharing his findings. In addition to driver measurements, John has published a number of complete designs at his site. I have not heard any of them; however, they appear to have been thoughtfully and competently designed. While I don’t agree with the prioritization of all his evaluation criteria (perhaps the subject of a later post), I do have a ton of respect for his opinions, and they have enhanced my outlook. I have never met or exchanged email with John; in spite of this I feel comfortable saying that he is one of the few voices that are really worth listening to in the DIY speaker hobby realm.

Design software

You will need design software. And not just for cabinet design. You can’t really build a good system without software to help you with measurement and crossover design. While everyone seems to have their favorite in this area, I think LspCAD from IJData is just fine.


If you want good results, you must measure the performance of the drivers you have chosen in the cabinets that you will use them in. Because of the varying diffraction effects from different cabinets, you cannot use the measurements from some other source in your design. And you should never trust manufacturer data. Sometimes published curves are from preproduction prototypes, sometimes they are outright lies, and very rarely is enough information given about test conditions to let you extract useful information.

To measure your drivers you will need a microphone with a very flat and/or calibrated frequency response. Some DIYers build their own using Panasonic omnidirectional electret elements–some of which are incredibly flat. The problem with this approach is that even with the flattest of the Panasonic omni elements, when all is said and done you may still be left with as much as 3 dB of error in the audio band. In my opinion, that’s not good enough. A few people make and sell complete mics using these elements and provide you with calibration data as well. This is the approach I recommend. The ones made by Kim Girardin at Wadenhome Sound are very good and very affordable. I’ve known Kim for several years as a result of our association with the Upper Midwest Chapter of the AES. He has real enthusiasm for the field and is one of the nicest people you are likely to encounter in the audio world. You may be able to plug one of Kim’s mics directly into the Mic input of your computer soundcard, but for the best results you will want a preamp with a controlled polarizing voltage; for that, the Mitey-Mic II (or MM2) is a classic.
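If you’re wondering what “using the calibration data” actually amounts to, here is a minimal sketch in Python: interpolate the mic’s known deviation onto your measurement’s frequency points and subtract it. All of the numbers and array layouts below are made up for illustration; a real calibration file will come in whatever format your mic’s maker supplies.

```python
# Sketch: correcting a raw SPL measurement with a mic's calibration data.
# Assumes the calibration data is (frequency in Hz, deviation in dB) pairs
# and the measurement is (frequency, SPL in dB). All values are invented.
import numpy as np

def apply_calibration(meas_freq, meas_db, cal_freq, cal_db):
    """Subtract the mic's known deviation (interpolated onto the
    measurement's frequency points) from the raw measurement."""
    deviation = np.interp(meas_freq, cal_freq, cal_db)
    return meas_db - deviation

# Example: a mic that reads 3 dB hot at 10 kHz gets corrected back down.
cal_f   = np.array([20.0, 1000.0, 10000.0, 20000.0])
cal_db  = np.array([0.0,  0.0,    3.0,     5.0])
meas_f  = np.array([100.0, 1000.0, 10000.0])
meas_db = np.array([88.0,  90.0,   93.0])
print(apply_calibration(meas_f, meas_db, cal_f, cal_db))  # [88. 90. 90.]
```

Nothing fancy, but it’s the whole reason calibrated mics are worth paying for: the correction is a one-liner once you have trustworthy data.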

Most mics based on the Panasonic elements have simple two-conductor outputs and are meant to be interfaced to soundcard “mic” inputs or something like the Mitey-Mic II. If you want an instrumentation mic with a more conventional balanced output (and using +48V phantom power), the Behringer ECM8000 looks interesting. I have never seen one of these in real life, so it may actually be utterly poopy. But the specs and price look decent.

Laptop issues

While the soundcards in desktop PCs are usually good enough for making usable speaker measurements, laptop soundcards tend to, er, suck. I have been using Behringer’s tiny and cheap UCA202 USB soundcard with my laptop when I need to make measurements with it, and the results have been just fine. It uses decent 16-bit TI/Burr-Brown converters, and the headphone output is particularly useful for making impedance measurements and is the main reason I use it rather than similar USB devices. (Look here for some test results.) Don’t mess around with the packaged ASIO driver–just plug it into your WinXP machine and let it use WinXP’s built-in USB audio drivers. If you absolutely, positively have to have ASIO with this guy, I recommend ASIO4ALL.
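For those wondering how a headphone output helps with impedance measurements: the usual DIY trick is to drive the speaker through a known series resistor and record the source voltage on one soundcard channel and the voltage across the speaker on the other. Here’s a hedged sketch of the arithmetic; the 100 Ω reference value and the voltages are assumptions for illustration, not part of any particular measurement package.

```python
# Sketch of the series-resistor impedance method: a headphone output drives
# the speaker through a known reference resistor; one soundcard channel
# records the source voltage, the other the voltage across the speaker.
import numpy as np

R_REF = 100.0  # ohms; the series reference resistor (value is an assumption)

def impedance(v_source, v_speaker, r_ref=R_REF):
    """Complex impedance from the voltage divider: Z = R * Vspk / (Vsrc - Vspk).
    v_source and v_speaker are complex spectra (e.g., from an FFT) taken at
    the same frequency points."""
    return r_ref * v_speaker / (v_source - v_speaker)

# Sanity check: if the speaker sees half the source voltage, Z equals R_REF.
v_src = np.array([1.0 + 0j])
v_spk = np.array([0.5 + 0j])
print(abs(impedance(v_src, v_spk)[0]))  # 100.0
```

Measurement software does this (and much more) for you, but it’s nice to know there’s no magic involved, just a voltage divider.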

In closing

I regret that I don’t have time to respond directly to DIY questions. If I don’t reply to your inquiries or address your question here, please don’t be offended. With this post I really just wanted to offer what little support to hobbyists that I can. And as time allows, I will try to post other tips and suggestions.

Hi-Fi reactionism and RoHS


One of the things that studying the hi-fi world from a design perspective (for the last three years) rather than from an engineering perspective (for fifteen years before that) has shown me is that hi-fi is an incredibly conservative field.

In all likelihood this has its roots in the solid-state disasters of the 1970s, and it certainly wasn’t helped by the CD disasters of the 1980s. Today, we are in a position where there is a loud and influential group of audiophiles who seem to feel that anything “old” (directly heated triodes, point-to-point wiring, horn-loaded fullrange loudspeakers, etc.) is good, and anything “new” (digital crossovers, solid-state anything) is “bad”. Well, the bad news for them is that there is another “new” being introduced into the industry, but this “new” doesn’t have its roots in technological advancement. Rather, it has its roots in ecological and public-health concerns.

I am talking about the RoHS (Restriction of Hazardous Substances) directive that the EU adopted in 2003. In short, it “restricts the use of six hazardous materials in the manufacture of various types of electronic and electrical equipment”[1]. The result of this directive is that it is becoming all but impossible to build equipment anywhere using parts and/or solder that contain lead. Because the EU is such a big market, and because manufacturers usually don’t want to set up different manufacturing processes for different markets, the use of lead is being phased out everywhere. Parts manufacturers are phasing out the use of lead tinning in their parts, and this means that assemblers need to use lead-free processes as well.

So, what I see happening soon is talk in the hi-fi industry of “leaded” vs. “unleaded” products–with the idea being that the “leaded” form of a product is more desirable. Of course, this idea will receive resistance from the silver-solder crowd, and it will be interesting to see how that plays out as well. Maybe this particular form of insanity has already started. If you know of any examples, I would love it if you’d let me know.

[1] Restriction of Hazardous Substances Directive. Wikipedia, the free encyclopedia. Accessed 13 May 2007.

On (not) making loudspeakers and the future…

open door policy

A summary of the current state of affairs from Biro Technology’s founder.

At the present time, Biro products are not available for sale in the USA. However, our custom services are still available to those in Istanbul, Turkey or to anyone willing to deal with shipments from Istanbul. I sincerely hope this is a temporary situation. If you are interested in how this situation came to be, I encourage you to read on.

For many years, there has been a close professional association between Biro Technology and Audio by Van Alstine, Inc. AVA was the first and eventually became the only dealer of Biro products, and in between the demands of running Biro, I would often provide technical and editorial services to AVA. That relationship grew even closer in 2001 when I decided to move to Istanbul, Turkey to advance my academic career. To keep Biro products available for sale in the USA, AVA agreed to manufacture and sell Biro products under an exclusive license. Around the same time, I took over most of the engineering duties at AVA on a consulting basis as well. Biro and AVA started to resemble two peas in a pod. Life was good.

I was therefore a bit surprised to read in the premier issue of Inside AVA that the recent lapse in Biro product availability was due to the unavailability of a critical L/1 system component. In actuality, while reliable supply of a critical L/1 component did indeed dry up last year—forcing us to retire production of the L/1—I had engineered a new design to replace the L/1. However, this new design was never put into production because of AVA’s increasing need to focus their available labor resources exclusively on their own line of electronics. Thus, the lapse in Biro product availability is actually due to labor pressures at AVA, not because of a lack of manufacturable Biro designs. For those who enjoy this kind of thing, the gory details follow.

In the summer of 2005, having recognized the need to engineer a solution to the obsolescence of a critical high-frequency subsystem component, I finally succeeded in producing a prototype system to replace the L/1 that was superior overall to the original, but at the same time was much easier to manufacture. This was preceded by a couple of non-starters and involved an incredible amount of reanalysis and reevaluation of some subtleties in design goals. The process was aided by significant advancements in instrumentation since the original L/1 was designed. It was long, hard, sleepless work, but the result was worth it.

When I went to meet with AVA to play them the final prototype and to go over the manufacturing process, I was informed that current labor pressures at AVA, which we had briefly discussed before, meant that something had to give. Regrettably, the most logical choice was for AVA to stop manufacturing loudspeakers. Given this state of affairs and the overall level of exhaustion all around, none of us saw much point in auditioning the new system, and as a result nobody at AVA ever heard the new system, a design of which I remain very proud.

To AVA’s credit, it must be said that it takes a special kind of person to want to make speaker systems. On the surface it seems to be a relatively simple process, but in fact it places heavy burdens on a manufacturer, particularly with respect to material handling (cabinets are heavy) and storage (cabinets are large). Given AVA’s increasing need to devote their labor to their own line of electronics, coupled with the difficulty in finding adequately qualified labor in general, they simply couldn’t carry the burden of making speakers anymore.

The above notification took place mere days before I was due back in Istanbul to start a new semester. Sure, I would have appreciated some advance notice of the situation, but, you know, la merde se produit. The situation left me with a couple of choices: return to manufacturing systems myself or find someone else to manufacture them. The latter was not really possible given the fact that I spend all but about 4 weeks a year in Istanbul. Implementing the former, while conceptually possible, was not doable in the available two-day time frame. That left me with no choice but to suspend product availability in the USA until a solution to the manufacturing problem could be found.

Ultimately, for our clients it doesn’t really matter why there is a lapse in Biro availability. To you the only thing that really matters is that we are not there for you, and for this I am quite sorry. Trust that in between my full-time teaching load, thesis writing, other projects and research, and my Biro activities here in Istanbul, I will be trying to find a solution to the manufacturing problem in the USA. And if you have any ideas, I would love to hear from you.

In the meantime if you would like to visit an old version of the main Biro Technology website, which has information on the most recently available Biro products, you can still do so.

Warmest regards,
Mithat Konar