9 February 2015

Making your headphones sound better and more natural

Inner Fidelity is a great site and occasionally they publish articles that are really interesting. Today they published another article about frequency response curves. Go read it, because it's a wonderful summary of the design targets for headphones and of the challenge of recording sound with a microphone and having it sound right through our heads and ears. Instead of buying a new pair of headphones, however, I decided to see what I could do with my HD 598s based on the information presented.

My sound card is a Soundblaster ZX and its drivers provide an EQ section whose changes are applied globally to all sound. Like most EQs it won't accept a detailed curve, but it is at least a place to start with modifying the sound.


Inner Fidelity provides the following human target response curve, which is ongoing research into building a better target curve to make headphones sound like a pair of decent speakers in a room. The black line is the ideal curve for headphones, whereas the green line is what makes speakers sound flat.


Given the target, we need to know what the raw response of our headphones is. Thankfully my Sennheiser HD 598s have been reviewed by Inner Fidelity and have the following curves. The grey curves are what we want, as they are the raw data and not the adjusted curve.


Given all this information, what can we do to make the headphones sound more natural? We can take the target response curve and the raw curve of the headphones and work out the EQ settings that drive the headphones closer to the Human target response curve.

We start by capturing the levels in dB at the EQ frequencies of the Human target response curve for headphones. For example, at 31Hz the value is +4dB and at 16kHz it's -10dB. Then we do the same thing for the average grey line of the HD 598s, so at 31Hz it's -33dB and at 16kHz it's -40dB. The problem with the headphone values is that they are much quieter; in this case the midband sits at about -30dB, so we add 30dB to zero the curve out.

We now have two curves, one being what the headphones actually produce and the other being the target, both of which compensate for sound recorded on a microphone and the average human body response. To work out the EQ changes necessary to bias our headphones towards the Human target response (HTR), we subtract the headphone value from the HTR value. For example, at 31Hz the HTR is +4dB and the headphones measure -3dB, so the change is +7dB. Do this for all the frequencies and you get the set of changes in the table below, followed by a short sketch of the arithmetic.

EQ Frequency (Hz)  HD 598 Raw (dB)  HD 598 +30dB (dB)  HTR Target (dB)  EQ Change (dB)
               31              -33                 -3               +4              +7
               62              -28                 +2               +4              +2
              125              -28                 +2               +1              -1
              250              -30                  0               +1              +1
              500              -30                  0               +2              +2
             1000              -31                 -1               +4              +5
             2000              -30                  0               +9              +9
             4000              -20                +10              +11              +1
             8000              -30                  0               +4              +4
            16000              -40                -10              -10               0
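
The arithmetic is trivial to script if you want to repeat it for your own headphones. Here is a minimal Python sketch, assuming you have already read the dB values off the graphs yourself; the numbers below are the ones from the table and would need replacing with your own headphone's raw curve.

    # Compute EQ adjustments from a headphone's raw curve and a target curve.
    # Values are the ones read off the graphs above for the HD 598 and the
    # Human target response (HTR) curve, at the sound card's EQ frequencies.
    frequencies = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]  # Hz
    hd598_raw   = [-33, -28, -28, -30, -30, -31, -30, -20, -30, -40]      # dB
    htr_target  = [4, 4, 1, 1, 2, 4, 9, 11, 4, -10]                       # dB

    offset = 30  # shift the raw curve so its midband sits at roughly 0dB

    for freq, raw, target in zip(frequencies, hd598_raw, htr_target):
        change = target - (raw + offset)  # HTR value minus the zeroed headphone value
        print(f"{freq:>6} Hz: set EQ to {change:+d} dB")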

So how does it sound? It sounds great. It's amazing what these changes can do to improve the sound of a pair of HD 598s. It doesn't sound like a set of speakers in the room, but it's a lot closer to that than the flat EQ. What is most interesting is that it has improved the positioning information from SBX Pro in games, and that alone is worth the effort. So while my exact settings only apply to the HD 598, you can generate your own from your headphones' raw curve.

6 February 2015

Amazon has fixed its Prime Video 144Hz bug

Last year I reported a bug with 120Hz and 144Hz video playback. I am pleased to see it has now been fixed. They didn't tell me when it happened; I just tried it again and it worked. It's nice to see they are fixing the bugs, but not so good that those impacted don't receive a notification when it's done.


22 January 2015

The exponential cost of testing

As any software project continues it gains features. Very few projects shrink over time; it's usually an ever-increasing list of features and an ever-growing codebase. Most software projects last years, and many companies that invest in software are doing so for the long term. There are exceptions, where software is written to be thrown away, but most of the time a company intends to build a system once and then run that service for a long time. However, there is an unfortunate growth in testing effort that comes along with those features, and you simply can't escape it.

We have to assume that our overall goal in testing a product before release is to determine that it has no defects. We are always going to miss cases, but the purpose is to ensure the quality of the software is as good as it can be. When we add functionality, the total amount of testing the product requires increases. We can argue about whether adding functionality causes linear growth in the total testing effort, exponential growth because features interact with each other, or sub-linear growth because features overlap. But I think it's fair to say we all expect adding a feature to add test cases, which results in the following three possibilities for the growth of total testing effort to validate a release in relation to features/time.




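As a back-of-the-envelope illustration of the argument above, here is a small Python sketch with made-up constants; the three functions are only stand-ins for the linear, interaction-driven, and overlapping cases, not a model of any real project.

    # Rough test-case counts under the three growth assumptions discussed above.
    # The constants are illustrative, not measurements from a real project.
    from math import comb

    def linear(features, tests_per_feature=5):
        # Every feature adds a fixed number of independent test cases.
        return features * tests_per_feature

    def with_interactions(features, tests_per_feature=5, tests_per_pair=1):
        # Every pair of features that can interact needs an extra case too,
        # so the total grows roughly with the square of the feature count.
        return features * tests_per_feature + comb(features, 2) * tests_per_pair

    def with_overlap(features, tests_per_feature=5, exponent=0.75):
        # New features reuse some existing coverage, so growth tails off.
        return round(tests_per_feature * features ** exponent)

    for n in (5, 10, 20, 40, 80):
        print(n, linear(n), with_interactions(n), with_overlap(n))
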
Given this progression of effort over time, the solution proposed in the late 1990s was to automate most of the test cases. Humans aren't very good at running the same test over and over; they tend to skip scenarios and make mistakes, and it's incredibly boring and labour-intensive to execute testing scripts against a piece of software. More importantly, since the testing effort grows as the project continues, a project really only has two choices: either it keeps adding staff so that the testing team is big enough to test everything, or it chooses to skip testing parts of the system. Every company I have seen has chosen to skip tests, and indeed we don't see any big companies today that are 99% testers from having pursued the test-everything approach.

The strategy of testing only some things usually comes down to testing the latest features and maybe some existing ones that interact with those new features. While a reasonable strategy for ad-hoc testing, it is not a very rigorous way to ensure software quality, and bugs in older features affected by the new features will be missed. Indeed, if a project team is refactoring and there are no automated tests to run, it's extremely likely that bugs will be introduced and go undetected with this testing strategy, which is why so few codebases without automated tests ever get refactored. It is far too expensive to refactor when a complete revalidation of the system is required.

Then Test Driven Development came along, and teams had the discipline to ensure that this progression of testing effort was passed on to the machine. It took time to develop the tests, but the end result was that the testing team didn't need to grow every release or risk missing parts of the system. It was without a doubt much cheaper to maintain the system, it was possible to refactor safely with confidence that nothing was broken, and hence the code didn't rot as quickly. The system could be released more frequently with good confidence that it worked, and all the while the team size could remain constant and economic.
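
As a trivial illustration of what gets handed to the machine, here is a minimal automated test using Python's unittest; the discount function is hypothetical, and the point is only that the same checks run unchanged on every release and every refactor.

    # Minimal sketch of an automated regression test (the feature is hypothetical).
    import unittest

    def apply_discount(price, percent):
        # Hypothetical feature under test: apply a percentage discount.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountTests(unittest.TestCase):
        def test_basic_discount(self):
            self.assertEqual(apply_discount(100.0, 10), 90.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()  # cheap to run on every build, release after release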

Yet today, in multiple blogs, this practice is vilified as slowing down developers and projects. Maybe these are short projects that don't need to think about testing effort for the long term and don't need to worry about refactoring because they are throwaway. Or perhaps they really do have a product that will be built and shipped just once, such as an embedded system. But it seems to me that many developers are talking themselves out of the basic problem: they don't want to write tests, and so they convince themselves the tests are useless. Whatever the other drawbacks and benefits, it is not just about the developer; it's about the team's effort as a whole and hitting a certain level of quality.

This continuous growth in effort is a real problem. If you want to release a new version every month, you either take the risk of shipping a bad release and leaving the bug hunting to your customers, or you have an enormous testing team, or you have automated tests. Take that down to weekly or even daily releases and manual testing becomes impossible on all but the smallest of software projects, short of an enormous testing team. This relationship isn't going away any time soon: you always face the choice between risking a bug that would have easily been caught had the feature been tested properly, and automating the test to begin with. That's why I remain firm in the belief that automated testing is critical to a project's future productivity and quality. Without it, velocity has to slow as bugs are fixed, and release quality steadily drops, whether from old code rotting or new code breaking. TDD is about the discipline to do this so that it always happens, because it's no surprise that developers don't like writing the tests once the code is done; it's just like documentation in that regard, and if it's left until later, later becomes never.