Friday, December 17, 2010

Different like a Square Tire

The dual-screen Samsung prototype that I slammed in this article has now launched on Verizon as the Samsung Continuum. A significant ad campaign launched with the handset, but it does not appear to have helped sales. Actual sales numbers have not been released, but Verizon cut the price of the device in half two weeks after launch and, hilariously, had this image of the device on their home page for a brief period (credit: Engadget).
Yes Verizon, we wish it looked like that too.
In a Freudian slip-of-the-mouse, Verizon Photoshopped out the gap between the large screen and the small screen to make it look as if the device had only one large display.

Most of the reviews followed the herd off the "different is always better!" cliff, but eWeek found a way to be beneficially different and gave a fairly realistic rundown of the device. It seems clear that the whole "dual screen" shtick is motivated by the business concept of differentiation, an ideal that leads to terrible products when the MBA forces the design team to make it happen. Samsung definitely tried too hard to be different, and all indications are that consumers "aren't buying it." I concede it's possible that sales have not been a complete letdown to the OEM, so I will withhold final judgment until I see some real numbers, but in the meantime let me reiterate that consumers want big screens and small phones, not gimmicks created to "differentiate" products without adding utility.

Thursday, December 16, 2010

Big vs. Small

In late winter 2009 a few friends and I decided to climb Mt. Rainier. After a long drive from Arizona we arrived at the lodge/trailhead in light snow around 4 pm. We had a limited amount of time for the trip, and as we sat in the car discussing our plans the snow got heavier and the wind began picking up. The low visibility and fresh snow were sure to slow us down, and we did not want to get caught moving during the late afternoon due to avalanche danger, so we began considering an early evening departure rather than the originally planned early morning departure. At around 6 pm we finally made the decision to start that evening. After finding an underground washroom area where we geared up and practiced crevasse rescue techniques, we set out. It was quite windy when we hit the "trail," and after adjusting our gear to eliminate all exposed skin we tied on to a rope with 20 feet separating each person and set out through the snow.
I was thoroughly elated and felt like bursting with enthusiasm for the trip as we started off. After gaining only a few hundred vertical feet we entered the clouds and visibility dropped off significantly; around a thousand vertical feet later the wind started picking up, gusting to what I'd guess was around thirty knots.
Luther, Bryan and Chris from left to right
It was now about an hour past midnight and a full-on blizzard. I was thankful to have just exchanged the lead with Chris; my quads were still burning from the effort of breaking trail as Chris began taking us straight up a sixty-degree couloir. Visibility was no more than a dozen feet, with wind howling well above thirty knots, and Bryan, who was now second in line behind Chris, kept falling. Since we were roped together his falls were pulling Chris down, and Chris was yelling advice and frustration down to him. Bryan was yelling back, but the wind whipped both voices away and I could not tell what either was saying or see what was happening. Every time Bryan fell it resulted in a fairly lengthy pause, and I'd straighten my knees trying to relieve the lactic acid burn in my quads. We were ascending the couloir to the left of a massive cliff, hoping it would break the wind a little; I was actually surprised that the powdery snow was able to stick to such a steep surface. About halfway up, the metal teeth under my mountaineering boots broke loose. As I fell I quickly put the adze end of my ice axe to my chest, falling on my stomach and forcing the point of the axe into the snow and ice, stopping my fall before the rope went taut and pulled Luther, who was in front of me.
After one of Bryan’s falls, as I stood there resting my quads, an interesting thing happened. As the shouts of the people just forty feet above me contested with the howling wind and the weak lights of headlamps stabbed about through the swirling darkness, a tiny sound came clearly through the thundering background: tinkle tinkle tinkle, like small children playing the triangle at a Christmas rehearsal. I looked around and saw that right at the base of the cliff there was a sort of concave river of ice; the guys above me were knocking ice shards loose, and because the couloir was so steep they passed right by my ear as they skated down the ice river. I experienced a moment of euphoria here and literally broke into a huge grin. This was it! This was a winter blizzard on a northern glaciated peak in all its glory, something I’d been looking for :D


Whenever I tell this story I mention that surreal moment as the tiny sound of ice falling down the cliff’s base reached my ears with startling clarity through the roar of the blizzard. For whatever reason there’s something about the small and beautiful in the midst of the huge and powerful that captures the human imagination. This same sense of awe filled me as I sang the Christmas carol “One Small Child” in church this morning.  How do you say “I love you” to someone who hates you? You have to put yourself in a place that the "beloved" can identify with and understand, and then you have to make a sacrifice for that person, a sacrifice that is big enough to render ulterior motives implausible. The lyric “One small hand reaching out to the starlight / One small Savior of Life” brings to its conclusion this expression of the small and beautiful somehow existing in the huge and powerful. Because I now have a nephew, whenever I imagine “one small hand” I see my nephew’s hand “reaching out to the starlight” and something about having a small child that I really love allows me to see the grand artistry of God’s plan to offer hope to mankind. He had to become a “tiny” person because that is what we are and what we understand, He had to sacrifice or the authenticity of His message would be easily questioned. 
And so the huge and powerful... became the small and beautiful, actions really do say things that words can't.

Wednesday, November 17, 2010

Bloomberg: "China Wins Order for 100 C919 Jets"

In what was no doubt a hard-fought win, state-owned aircraft maker Commercial Aircraft Corp of China (COMAC) has managed to sell 100 C919s to the state-owned (and creatively named) aviation conglomerate and aircraft lessor Aviation Industry Corp (AVIC). Bloomberg's Margaret Conley, who I thought was a business correspondent but now appears to be an intern from Bloomberg's entertainment division, calls the deal a "win" and describes it as "breaking Airbus SAS and Boeing Co.’s stranglehold on the world’s second-largest market for new aircraft."

I guess Margaret is new to the industry, but when one company owned by the Chinese government "sells" a product to another company owned by the Chinese government, it is neither a "win" nor the breaking of what I imagine is a very sinister "stranglehold."

The C919, better known as the ARJ21, is a copy of the vintage-1960s DC-9 family of aircraft and is actually being built with the same tools that McDonnell Douglas once used to build MD-90s (a derivative of the DC-9). McDonnell Douglas, in a fit of terminal shortsightedness, actually moved production of the MD-90 to China for a brief period (which is how China got the tooling) before being acquired by Boeing. It's taken the Chinese a couple of decades, but with help from numerous foreign aerospace companies they have now cobbled together an aircraft with a fuselage designed by McDonnell Douglas, wings designed by Russian aircraft maker Antonov, engines designed by General Electric, a fly-by-wire system made by Honeywell, and avionics made by Rockwell Collins. Despite contributing slightly more than nothing to the design, China claims that the jet was designed by Chinese engineers with completely independent intellectual property rights... which fits well with the Chinese government's philosophy that if it walks like a duck, quacks like a duck, and looks like a duck... then it is indubitably a chicken... until tomorrow, when the government develops a need for caviar, and then it will be a sturgeon.

The disconnect between reality and press release in Chinese aerospace continues to be downright farcical, but no one will point it out. Outside of companies owned by China Inc., the ARJ21/C919 has managed to sell a total of two aircraft, both to hitherto unknown Vietnamese carrier Lao Air. I have no doubt that China will eventually become a serious competitor in the commercial aircraft sector, but until then why don't we all stop pretending that the C919 is anything but a flying Frankenstein with bits and pieces sewn together from dead foreign designs. And of all companies, Bloomberg is the one publishing this trash? Somebody call the editor.

Friday, November 12, 2010

Is God Evil?

I recently heard a statement that went like this: 
If God knows the past, present, and future, and hell is real, then God makes people knowing that they will go to hell; thus God creates people for hell; thus God is evil.
This statement is supposed to be problematic for theists; however, it is only problematic if the people who were created by God and went to hell never had a true choice. It does seem that if God both created a being and knew what its final destination would be, then that being could not have had a choice about its destination, just as a rock has no say in its destination when it is dropped off a cliff. Thus people mentally frame the issue like this:
God, by creating a soul, sets off an unalterable chain of events that results in a person going to hell. 
It is tempting, therefore, to believe that God is evil, until one realizes that we are not talking about the contents of the empirical universe when we speak of God and souls, and so we are not talking about an unalterable chain of events. Souls are not bound by the laws of nature and thus stand independent of natural causation; their decisions are not determined by physics and chemistry but by an agency that transcends natural law and truly does have free will, as things governed by physics and chemistry never can. Because souls truly have the ability to choose, they can truly be held accountable for their actions. If God has foreknowledge of the soul’s decision, it does not mean that the soul did not have the free will to make that decision; thus God did not “make people for hell” and is not evil.
The force of this statement, which at first seems considerable, is removed when the hidden assumption of naturalism is revealed, because assuming naturalism for a question involving God and souls is absurd; the statement is not problematic for theists if the souls in question are not constrained to natural causation. So again, the force of this statement derives from people subconsciously assuming that the supernatural realm is subject to natural laws of causation, which of course it is not. The force of the statement, then, can only be as great as the degree to which the observer is confusing natural causation with supernatural agency.


All this talk of free will raises a question for the atheist: 
Do you really believe that you have free will?
He (this sentence would be awkward with gender neutrality) cannot logically answer yes if he has a basic understanding of the sciences, because the assumption of naturalism means that all his actions were ultimately determined not by his free will but by the initial conditions of the big bang and the random outcomes of quantum interactions, neither of which he has any control over. And if the atheist does not believe he has free will, but rather that his thoughts are the result of something he can't control, then how can he believe that his thoughts have any validity?

Monday, November 1, 2010

A New Metric for Smartphones

Everybody loves a great design; everybody loves designs that do what they're supposed to do, and do it well. Sometimes coming up with a great design is easy, and sometimes it is hard. It's hard when there are conflicting design goals; for instance, an airplane has to be light, but it also has to be strong. These goals conflict, and finding the optimal balance of strength and weight is perhaps the primary problem an aerospace engineer (an "AE," as they were called in undergrad) faces. Similarly, smartphones have a conflicting set of design imperatives. They must be small, but they must also display a lot of information and be great web browsers and media viewers.

Being great browsers and media viewers requires a large screen, but having a large screen conflicts with the design imperative to be small. For the AEs, the solution to their strong-but-light dilemma is the use of efficient structures and efficient materials, meaning those possessing a high strength-to-weight ratio. For smartphones the solution is also efficiency, but with surface area rather than weight. For this reason I decided to calculate the ratio of screen area to total surface area for a number of phones, to see what kind of progress we've made in how efficiently our phones fill out their dimensions with big, beautiful screens. The factors affecting this ratio are generally:



1. General Design
2. Screen Size
3. Screen Aspect Ratio (shown in bold light grey)
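As a quick sketch of how the metric itself can be computed: the function names below are my own, and the EVO body dimensions are approximate figures I'm using for illustration, not official specs.

```python
import math

def screen_dims(diagonal_in, aspect):
    """Width and height of a screen from its diagonal and aspect ratio."""
    a, b = aspect  # e.g. (5, 3) for an 800x480 panel
    scale = diagonal_in / math.hypot(a, b)
    return a * scale, b * scale

def areal_efficiency(diagonal_in, aspect, body_w_in, body_h_in):
    """Screen area as a fraction of the phone's front-face area."""
    w, h = screen_dims(diagonal_in, aspect)
    return (w * h) / (body_w_in * body_h_in)

# HTC EVO 4G: 4.3" screen at 5:3, approximate 2.6" x 4.8" front face
print(f"{areal_efficiency(4.3, (5, 3), 2.6, 4.8):.1%}")
```

Only four numbers per phone are needed, which is why it's easy for review sites to report.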


The Android handsets are evolving the fastest and should offer a good look at where we're going. Let's take a look at three milestone (pun intended) Android devices. The ratio of screen area to total front-of-phone surface area is in orange; I'm going to call it "areal efficiency," or "AE" for short.

Phones scaled to equal height

I chose the G1 simply because it was the first Android handset ever, the Droid because its launch was Android's true coming-of-age, and the EVO partially because it remains (in my opinion) one of the best-spec'd Android phones out there and partially because I suspected it would score very well in the areal efficiency department. As you can see, the trend appears to be solidly in the right direction: areal efficiency is going up. Unfortunately we'll see later that it's not as clear-cut as these three phones make it appear, but for now let's move on to Android's stiffest competition and the device that ushered in the era of truly smart phones, the iPhone.

While there are numerous makers of numerous Android handsets, there is only one maker of iPhones, and only one new model comes out per year (until this year, if the rumors regarding a VZW iPhone finally pan out). Apple is, of course, that maker, and they appear to be committed to a fixed screen size and are definitely committed to supporting only one aspect ratio. These self-imposed restrictions handicap the advancement of areal efficiency but exist for generally good reasons, which we'll explore below. First, let's see how the iPhones do on the areal efficiency metric.




Not great, but considering that it launched a year before the G1, this is yet another metric that shows the original iPhone to have been ahead of its time, though perhaps the mold was frozen a little too quickly.


Screens have a minimum bezel; that is, the screen itself must extend beyond the actual pixels in order to function properly. Reducing this bezel is no doubt one of the design goals of screen manufacturers, but currently it requires all phones to have dead space between the edge of the screen and the edge of the phone's surface, as shown by the red line on the left edge of the iPhone 4 image. This (and other practical matters) limits the maximum areal efficiency of all phones to a value below 100%. This bezel does not scale linearly with screen size, however, so the bigger the screen, the higher this maximum value of AE is. Apple's choice of a 3.5" screen seemed generous in 2007 but now seems rather small, and this relatively small screen size is likely limiting the iPhone's areal efficiency to some degree. On top of screen size, there is also an optimal shape for phones, and achieving it is also limiting Apple. Let's take a look at how the basic aspect ratio (y dimension over x dimension) of the phone has evolved over the last few decades:
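The non-linear bezel effect is easy to see with a back-of-the-envelope model; the dimensions here are made up for illustration, and the model simplistically assumes a fixed bezel width on all four sides.

```python
def max_ae(screen_w_in, screen_h_in, bezel_in):
    """Upper bound on areal efficiency when a dead bezel of fixed
    width surrounds the screen on every side."""
    face = (screen_w_in + 2 * bezel_in) * (screen_h_in + 2 * bezel_in)
    return (screen_w_in * screen_h_in) / face

# Same 0.15" bezel around a smaller screen vs. a larger one
small = max_ae(1.8, 3.0, 0.15)   # roughly a 3.5"-class panel
large = max_ae(2.2, 3.7, 0.15)   # roughly a 4.3"-class panel
print(f"{small:.1%} vs {large:.1%}")  # the bigger screen has the higher ceiling
```

The bezel is a fixed cost, so spreading it over a larger screen raises the AE ceiling: exactly why a 3.5" screen caps the iPhone lower than a 4.3" screen would.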


Obviously not to scale

The green outline is to highlight aspect ratio; when phones became mobile they became smaller because (of course) people had to carry them everywhere, but the aspect ratio generally stayed the same until the advent of smartphones, when it decreased fairly dramatically. Having a screen with a standard aspect ratio (4:3 or 3:2 in that day) was probably a driving factor in decreasing overall phone aspect ratio, and while there is room for further reductions, most manufacturers wisely seem to be preventing their devices from becoming too square. Apple likely has this in mind when designing the iPhone, but their chosen aspect ratio of 3:2 has had (what I imagine is) an unintended consequence. In order to prevent the device from being too square, the iPhone has a lot of dead space on the top and bottom that could be filled by more screen but is not; the vertical height of the iPhone's single button creates it at the bottom, but there is nothing save perhaps design aesthetics necessitating it at the top. Maintaining the screen's aspect ratio is critical for the continued backward compatibility of apps on the iPhone, so future advances in areal efficiency will have to come from eliminating dead space on the sides and top/bottom. I believe Apple has already cut the distance from side-of-screen to side-of-phone to the minimum possible with the iPhone 4 (they've reduced it further than any other phone I'm aware of, save perhaps the HTC Aria), so it will be interesting to see if Android or WP7 handsets really apply any pressure to Apple by achieving consistently high areal efficiency.

Back to the original Android handsets: from the three given, it appeared that we were headed straight in the right direction, but launches since the EVO have all been steps backwards. Here's a graph:


As you can see, AE peaked with the EVO and has declined since. I had high(er) hopes for the Droid X because its hardware buttons allow extra space to be saved below the screen relative to the standard capacitive buttons, but any savings were lost by including a wide strip above the hardware buttons that displays the VZW logo; despite this, the Droid X scores second place out of all handsets. Also mildly disappointing was the Galaxy S line from Samsung, which, while they are great phones, would have been better if they wasted less surface area. They unnecessarily mimic the symmetrical dead-space areas on the iPhone's top and bottom without also mimicking the iPhone's thin side bezel, and are put to shame by Motorola's Defy, which is water-resistant, shock-resistant, and beats Samsung's 4"-screened offerings (AE 58.91%) with an AE of 60.55% on a 3.7" screen. It is possible that hardware constraints unknown to me are reducing AE on these two models, but I rather doubt it, as circuit boards with very similar requirements to those of the Droid X or Galaxy S are currently being stuffed into much smaller designs such as Sony's X10 mini. While the Galaxy S and Droid X are a little worse than they could be, the most egregiously inefficient design I've seen so far is Samsung's dual-screen prototype. Honestly, when the primary screen is ("super") AMOLED and you can shut off however much of it you like (making it smaller and reducing its power draw), having a second screen is entirely pointless. This design annoyed me so much that I thought to myself, "there should be some design metric that prevents companies from wasting time on such crappy designs so they can spend more time on good designs"... well, here it is.

Just to show that maybe consumers have intuitively picked up on this metric without consciously quantifying it, and that OEMs should probably be aware of it regardless of what some nobody writes in his blog, let's add the iPhone to the above graph of Android handset AE values:


Is it a coincidence that the first Android phone to be truly competitive with the iPhone was also the first one to exceed the iPhone's AE? I would say only partially. Also perhaps indirectly indicative of this metric's significance is that Android's top two selling handsets are also one and two in terms of AE. I certainly don't think that making AE ubiquitous is going to change too much, but I do think that if smartphone OEMs were a little more conscious of wasted surface area, we'd all have slightly better phones.

In conclusion, the whole point of this blog post is to make smartphones better. We consumers can have a say in smartphone design if we give the OEMs feedback and apply a little pressure, and the best way to do that is to get review sites to grade each phone on relevant metrics. Currently phones are not graded on areal efficiency, but if they were, I think it would ultimately result in better designs and happier consumers.

Click here to send review sites an endorsement of AE as a smartphone metric.

Click here to sign a petition to get it included.

If you want to see the exact numbers used to calculate AE values or calculate your own here is the Excel spreadsheet.

Sunday, October 24, 2010

Africa

Hundreds of billions of dollars have been pumped into Africa since 1980, and it’s now poorer than it was then. What exactly did that money do?

The money built roads, schools, governments, wells, and farms. It hasn’t worked. Why hasn’t it worked? I think the answer is clear: it hasn’t worked because African culture doesn’t work. There. I said it. Now I'll pause briefly so you can throw tomatoes at me... done? OK, moving on... Why do we hold culture up as something intrinsically good and untouchable? Why is it taboo for me to say “culture is the problem”? The only answer, I think, can be that we are afraid of such an objective eye being turned on our own culture. Eliminating culture from the possible causes of poverty has prevented us from addressing it. To address culture, we need to build people, not roads. We need to build people, not schools; we need to build people, not governments. If we build roads they’ll get washed out, if we build schools they’ll crumble, if we build governments they’ll become kleptocracies. If we build people they will build and maintain roads, if we build people they will build and teach in schools, if we build people they will build and run government. If we build people they will build a better culture.


What is building a person? In order to build a person we first need a standard for him to be built to: what is a good person? What is the ideal man? And we need to be realistic; we won’t be able to build perfect people, and we won’t be able to build the ideal man. But... if we can make “the standard” the goal of the imperfect person himself, then I think we have succeeded.


Here is a standard I think most will agree on: someone who is honest; someone who cares most about things that matter most and least about things that matter least (I understand that "what matters most" needs to be defined, but for now I'm just going to say some things clearly don't make the cut, like soccer); someone who does to others what he would like to have done to himself; someone who works hard; someone who is loving, generous, kind, patient, and persevering.

We now have a goal and a set of standards. The goal is to make it the goal of the individual to adhere to the aforementioned standards. Now why would he do such a thing? Why would he rebel against a culture that is more comfortable with cronyism and chronic shortsightedness? What will inspire him to such heights of altruism? ...I know only one man for the job: Jesus. Jesus teaches the values that Africa needs. Changing African culture has to happen by first changing individuals, and that doesn't mean abandoning the African arts or forcing Americanization down their throats. The futility of continued operations in Africa is an embarrassment, an embarrassment that is a reflection of the embarrassing contents of human nature. While it’s uncomfortable for some, I really think Africa's need for God has become quite inescapable. As Matthew Parris writes in the December 27, 2008 issue of The Times: “As an atheist, I truly believe Africa needs God.”

When atheists start advocating God, there's probably something to it; Africa will benefit when we escape the traditional faux pas and go with strategies that actually work.

.............................................................................................................
OK I understand that writing doesn’t in and of itself help anyone but I had to get this one off my chest and I do plan to put hands and feet to these words.

Wednesday, October 20, 2010

HPalm Fails: Ignores the new Smartphone Paradigm

The HP (formerly Palm) Pre 2 has been announced but not yet launched. It will be a commercial failure.

How can I announce the failure of a product I haven't even touched? First let me add that I'm not saying HPalm won't make money on it (they may); then let's examine the flaws that scuttled the original Pre and have now been unhappily inherited by the Pre 2.

The original Pre launched with arguably the best OS on the market: better integrated and smoother than Android, more capable and flexible than iOS, it was an absolutely world-class operating system. The reasons the Pre 1 was not the smashing success it could have been are:

1. Hardware shortcomings
2. Launching on a second-tier carrier
3. Mediocre Marketing


Point 3 is obvious if you've seen the downright weird commercials that accompanied the Pre's launch.
Point 2 is also obvious: Sprint's smaller user base means fewer potential customers; Sprint does not have the prestige of AT&T or VZW, and the cutting-edge consumers that the Pre targeted are more carrier-prestige aware than most; also, Sprint's network is inferior to VZW's and (arguably) AT&T's.
Point 1... this was the single biggest factor in the Pre's mediocre sales and is also probably the most open to debate... so let's talk hardware...

First, let's start with what Palm got right. The Pre was the first top-tier smartphone to launch with an ARM Cortex A8 SoC. Definitely no mistake there. It also had 256 MB of RAM: no mistake there. Its keyboard was well executed: no mistake. It looked attractive and fit nicely in your hand... again, no mistake there. Palm nailed all the details but failed on the broader, more general points of design, though in fairness to Palm it may have been impossible to spot this flaw at the time the device went into production.

The first jet-powered commercial aircraft, the de Havilland Comet, had four jet engines built into its wing roots. Later airliner designs had high wings and low wings; two, three, or four engines mounted inside, underneath, and on top of the wing; T-tails and conventional tails; subsonic and supersonic cruise speeds. Now there is almost zero variation in modern passenger jets. They are all low-wing, conventional-tailed, subsonic aircraft with two engines mounted under the wing. Sure, there are a few models that still have four engines, but they are the exception, and none of them are selling very well right now. Cars have similarly converged to conformity; so have laptops, bubble gum, and pretty much every other mature product. The reason all these products now look basically the same is that best has been found, and when you've found best... why go for anything else? Now, best isn't the same for everyone, so there is room for outliers in all these product categories, but outliers can only ever occupy niches; they cannot hold high market-share positions. With a 3.1-inch screen, the Pre 2 is an outlier. The last year has seen a trend toward ever-increasing screen sizes. First the iPhone's 3.5" screen seemed to be the standard, then the Motorola Droid launched with a 3.7" screen and was followed by a string of other 3.7" devices, then a slew of 4.0"-and-above handsets were launched. Currently the Droid X and EVO 4G are the screen-size champs at 4.3" each, and both are selling faster than HTC and Motorola can make them.
When given the choice between browsing the internet on a 3.1" screen and a 4.3" screen, everyone takes the 4.3" screen. This is hugely important because internet browsing is now a primary smartphone function. Similarly, when given the choice between viewing photos or movies on a 3.1" screen and a 4.3" screen, everyone picks the latter again, and while not as big a deal as internet browsing, this is still a significant issue for many users. 4.3 may not sound that much bigger than 3.1, but you have to keep in mind that this is area (dimensions are squared), so actual screen area on the EVO is (4.3^2/3.1^2) 92% greater than the Pre's (ignoring aspect ratio differences). The advantages of a big screen are numerous and the advantages of a small screen are few; there's a good reason why screens have been trending bigger: they're better at most things for most people. Screen size alone makes the Pre 2 an "also-ran" device; I would guess that only 3% of prospective smartphone buyers would consider a phone with such a small screen.
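The area arithmetic in that paragraph is worth making explicit; this is pure geometry, valid for any two screens sharing the same aspect ratio.

```python
def area_ratio(diag_a, diag_b):
    """How much more area screen A has than screen B,
    assuming both share the same aspect ratio."""
    return (diag_a / diag_b) ** 2 - 1

# EVO (4.3") vs. Pre 2 (3.1")
print(f"{area_ratio(4.3, 3.1):.0%}")  # ~92% more screen area
```

Because area goes with the square of the diagonal, seemingly modest diagonal differences translate into dramatic usable-area differences.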
The hardware keyboard is also a factor. Apple's original iPhone was greeted with much skepticism for its lack of a physical keyboard; it was an extremely edgy and innovative move by Apple. And it paid off: good touch-screen keyboards work better than most physical keyboards, as I found out after purchasing a D1, which is why most high-end smartphones forgo the physical keyboard. There's still a decent market for smartphones with physical keyboards, but it represents only about 10% of the market.

With the screen eliminating 97% of prospective buyers and the keyboard leaving another 90% less than enthused I've got $20 that says the Pre 2 will not achieve greater than a 1% market share. HPalm has ignored the new Smartphone Paradigm, they have not aimed for best in all categories and so they will not be successful. Better luck next time HPalm.

Wednesday, October 6, 2010

What is Software's Jurisdiction?

Computers will seduce you, they will work perfectly every time for ten years. And then one day. They'll kill you.
-Doctor Art Draut,
Test Pilot 
PhD-Physics
Professor of Aerodynamics and Computer Programming

As a flight instructor, one of the first things I teach my students in the Cessna 172 is how to "lean" the "mixture" correctly. Most aircraft have separate controls for the throttle (black) and the mixture (red). The throttle controls how much fuel and air (unless the mixture is full out, in which case the cylinders get all air and no fuel) get sucked into each cylinder during the intake stroke of the Cessna's four-stroke reciprocating engine. The mixture controls exactly how much fuel is added to the air on its way into the cylinder. Cars originally didn't have mixture controls because they usually stay at around the same altitude; this meant that at higher elevations or on hotter days cars were burning fuel inefficiently, because they were adding the same amount of fuel at 4,000' as they did at sea level, and at 4,000' there is less air, so less fuel is required to achieve the optimum fuel/air ratio. Later, with the advent of electronic fuel injection, automakers quickly realized that they could add a chip to the engine that sensed air density and adjusted the mixture perfectly to ambient conditions for each individual intake stroke of the engine. This significantly increased both engine performance and engine efficiency.
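To make the altitude effect concrete, here is a rough sketch of the calculation an electronic mixture controller effectively performs. The standard-atmosphere density model is real; the airflow number and the target air-fuel ratio below are illustrative assumptions on my part, not figures from any engine manufacturer.

```python
def air_density(pressure_alt_ft):
    """ISA air density in kg/m^3, valid below the tropopause."""
    T0, L, p0, R, g = 288.15, 0.0065, 101325.0, 287.05, 9.80665
    h = pressure_alt_ft * 0.3048          # feet -> meters
    T = T0 - L * h                        # standard temperature lapse
    p = p0 * (1 - L * h / T0) ** (g / (R * L))
    return p / (R * T)

def fuel_flow(airflow_m3_s, pressure_alt_ft, air_fuel_ratio=14.7):
    """Fuel mass flow (kg/s) to hold a target air-fuel ratio by mass."""
    return air_density(pressure_alt_ft) * airflow_m3_s / air_fuel_ratio

# Same volumetric airflow into the cylinders, sea level vs. 4,000 ft:
sl, alt = fuel_flow(0.05, 0), fuel_flow(0.05, 4000)
print(f"fuel required at 4,000 ft is {alt / sl:.1%} of sea level")
```

The thinner air at 4,000' needs roughly 11% less fuel for the same intake volume, which is exactly the correction the red knob (or the chip) has to make.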

The auto industry in the United States is about 700 times larger than the light aircraft industry (by revenue), and the makers of aircraft engines did not have enough money to redesign their engines with electronic fuel (mixture) control; as a result, airplanes are stuck with vintage-1950s engines that still have mixture controls. Many old-school pilots are very happy about this because they simply do not trust electronics. There is a fundamental trade-off that occurs when you put a computer in charge of something. Performance goes up (hopefully), but complexity goes up as well; when you add electronic fuel control to an engine, you are adding one more part that can break and kill your engine. Electronics, however, are very reliable. A friend of mine is an engineer employed by Cessna, and he reports that mechanical components on Cessna aircraft must have a reliability rating of 10^-5 (1 failure in 100,000 cycles) or better, while electrical components must have a reliability rating of 10^-9 (1 failure in one billion cycles) or better. Electronics are especially reliable when they only have to do one thing and cannot be told to do other things, such as the chips that control the mixture in jet engines (which all jet engines in the sky today have). In the case of the Cessna's engine, I believe handing the job of leaning the mixture to a computer is a no-brainer. I lean the mixture several times a flight, but a computer will do it more than a thousand times a second. Whenever the engine is at a less-than-optimal mixture setting it becomes more likely to suffer a failure in the future, so my relatively infrequent leaning is comparatively bad for engine health; thus, in this specific case, the risk of failure would likely go down.
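Those two reliability figures differ by more than a glance at the exponents suggests. A quick sketch of cumulative failure probability makes the gap vivid (assuming, simplistically, that cycles fail independently):

```python
def p_any_failure(per_cycle_p, cycles):
    """Probability of at least one failure over `cycles` independent cycles."""
    return 1 - (1 - per_cycle_p) ** cycles

mech = p_any_failure(1e-5, 100_000)  # mechanical part over 100,000 cycles
elec = p_any_failure(1e-9, 100_000)  # electrical part over the same cycles
print(f"mechanical: {mech:.1%}, electrical: {elec:.4%}")
```

Over 100,000 cycles the mechanical part has roughly a 63% chance of having failed at least once, while the electrical part sits around 0.01%: four orders of magnitude apart, just like the per-cycle ratings.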

Moving on from the Cessna to more serious hardware: the problem with electronics is that the more things you want them to do, the more likely they are to fail. The chips controlling the mixture on a GE90 have never failed according to the NTSB database, but Windows (which is designed to do pretty much everything) fails every day. Windows not only has the disadvantage of being more complex because it's designed to do everything, it also has the disadvantage of being externally accessible. If your computer is physically connected to the internet, then it is possible for your computer to get hacked.

Continuing to even more serious hardware: the flight management system on a modern fighter jet is both complex (it must be aware of and correct for temperature, pressure, landing gear position, payload door position, payload weight, fuel weight, GPS data, flight regime, other aircraft, pitch, bank, roll, yaw, radar data and much, much more) and accessible. Modern fighter FMS systems actually run an operating system that is upgradeable, and to be upgraded it must be accessible. It is only "internally" accessible, however; to my knowledge (this is really just an educated guess) you must physically connect to the aircraft to modify its software. It seems comical that software, something we usually associate with Microsoft Windows and unicorns named Charlie, could cause something as deadly serious as an air superiority fighter to crash, but that is exactly what happened to the world's best fighter jet (a prototype) in 1992 and then to a production aircraft in 2004.

Continuing to even more serious hardware: the American electrical grid is planning to "get smart." Last year the Obama administration set aside 3.4 billion dollars (for a total of 11 billion now) to make the electrical grid smarter, a move the DOE calls a necessity. It is estimated that intelligent grid control could make the system 10% more efficient... and guess which communications network is to be used to control this new dynamic system? That's right, the internet.
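As a rough scale check (my arithmetic, not any official figure), here is what that 10% estimate is worth, assuming annual U.S. generation of roughly 4 trillion kilowatt-hours:

```python
# Scale check on the smart-grid numbers: average generation rate, and the
# size of the estimated 10% efficiency gain.

ANNUAL_KWH = 4e12        # assumed annual U.S. generation, ~4 trillion kWh
HOURS_PER_YEAR = 8760

avg_power_gw = ANNUAL_KWH / HOURS_PER_YEAR / 1e6  # kW average -> GW
savings_kwh = ANNUAL_KWH * 0.10                    # the estimated 10% gain

print(f"average generation rate: {avg_power_gw:.0f} GW")
print(f"10% efficiency gain:     {savings_kwh:.2e} kWh per year")
```

That works out to an average generation rate around 450 GW, and a potential saving of 400 billion kWh a year, which is why the DOE considers the upgrade worth billions.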
Let's pause for a second and think about America's electrical grid. Generating about 4 trillion kilowatt-hours of electricity a year and powering hundreds of millions of appliances, the electrical grid quite literally keeps us alive. Think I'm exaggerating? Consider what electricity does for us: it powers our computers, phones and lights; our freezers, refrigerators and home heating/cooling systems; our traffic lights, gas stations and air traffic control system. Maybe you don't need any more convincing: the electrical grid is a serious piece of hardware. If you dabble in the safety sciences you will frequently see a chart like this one:
Software first crept its way from the bottom left of this chart to the bottom right. Now it is climbing from the bottom right toward the top right corner. While very few dispute that software makes cars, Cessnas and jet engines better, its use in fighter jets might give you pause. There is a concept in the safety sciences called "blood priority": it basically states that even though people know a problem exists, they won't fix it until it kills someone. As people get more comfortable with software, they allow it more leeway. As an (amateur) programmer myself I love software, but it cannot solve everything, and it cannot be held responsible for anything. If we continue to allow software to expand its jurisdiction into more and more of our lives, at some point we will reach a state of dangerous reliance and/or vulnerability. This is exactly what Doc Draut meant when he uttered the quote at the top of this page.
He does not mean that everyone will get killed by a T-1000 in ten years; he simply means that we humans need to remain ultimately in control of the software that runs our lives. Nobody is in full control of the internet, so I don't believe allowing it to run a life-critical system like the electrical grid is an acceptable risk. Referring to the above chart again, I think everyone agrees that the potential severity of a nationwide electrical failure belongs in the "catastrophic" row. The question is, which column does it belong in? Lockheed Martin seems to think it is not in the far right column.

In the interests of not sounding like a prophet of doom I would like to end this post with this link.

Monday, September 13, 2010

Responding to Failure

In general, we seem to let personal failures dictate to us how they should be viewed; failure takes on a life of its own and we cower away from it.

It would be impossible to quantify accurately, but I think it is likely that destructive responses to failure cost the global economy well over $100 billion annually and produce a significant share of instances of poverty, depression, and domestic abuse.

The problem is our default response to failure. Instead of automatically being placed in a glass cage and observed from an objective perspective, failures are somehow automatically granted the freedom to run amok in the unchecked subjectivity of a person's worried thoughts.
Failure is a self-exciting problem: fear and worry (which go together, perhaps they don't need to be listed separately) tend to prevent us from seeing the situation objectively. Because we can't see the failure objectively, its severity is magnified by our fear and worry, which causes us to be more fearful and to worry more. It's hard to take advice when the fear-magnified stakes seem very high and you feel sure of your own perception of the situation, so help from outside objective sources may fail to find purchase in your thought process (and will cause your friends to... worry about you). I think the best internal solution to this dilemma is to zoom out to your entire life and ask yourself, "What is the meaning of life?" If your failure has nothing to do with the answer to that question, then your failure cannot be very important, even though it may still feel important. Once a failure has been recognized as weightless, it can (maybe with some time) be viewed from an objective perspective, and real lessons can be learned from it. Most failures are the result of holding an incorrect view of some portion of reality; failure helps us match our perspective of reality with true reality, and so failures can be far more instructive than successes if we deal with them correctly. As the famous British scientist Sir Humphry Davy said: "I have learned more from my [failures*] than from my successes."

Personally, I believe the meaning of life is to glorify God by living as I was designed to; from that perspective, only moral failures carry any weight. Therefore only moral failures have the objective right to hijack my emotions... which is a good thing; emotions were invented for a reason. Strong emotions are naturally associated with their causes by our brains and stick in our memories much better than individual events and facts do. A strong emotional reaction to such true failures should be the result of such failures; on top of being the right response, it also equips us with a strong memory that provides future motivation not to fail again.
Even (maybe especially) true failures, however, can be taken out of perspective. If we allow such failures to dominate our thoughts for too long and never trace the failure back to its cause and put some real thought into how to prevent it in the future, then we are failing to respond appropriately to the failure. If you are reading this and thinking about a failure, that failure needs to be viewed as what it really is: an event in the past (probably). The past is good for learning from and good for analyzing, but it is not good for living in. It is not good to allow natural emotions to overstay their welcome; at some point we must show them the door or they will prove destructive to our lives. There is a balance somewhere between too much emotion and too little. I won't even begin to attempt articulating where that balance is, but I think most people would agree it exists. This post addresses the problem of allowing emotions disproportionately great authority; they certainly deserve some authority, but not at the complete expense of your left brain. Maybe later I'll share my thoughts on emotional deficiency... or maybe I'll get a friend with a higher emotional IQ to do it ;)


*Original word: "Mistakes"

Sunday, September 5, 2010

Why Tesla Motors will probably fail


Don't get me wrong, I love Tesla, and Elon Musk is probably the most capable entrepreneur on the planet. But Tesla and Elon have picked a losing power train architecture. Battling this disadvantage may prove impossible.


Expectations rule our reactions. Someone with low expectations will react positively to an event that someone with higher expectations may react negatively to. Because of this, the level of expectation people hold regarding a product will dictate in large part the success of that product. Ideally a marketing firm will be able to convince the public that the product is worth buying before the product is available. And when the product is available it should exceed the expectations set by the marketing firm by some degree creating positive impressions on early adopters who will share those impressions with friends thus giving free publicity to the maker of the product. Of course for this to work the product actually does have to be very good.

People have existing expectations about cars, and Tesla cannot change them much. They expect cars to go from A to B, to be comfortable, and to always be ready to go any distance. They don't want to think at all about whether their car can do what they want it to, and they don't, because modern cars are that good. The biggest quibbles car reviewers usually have with a new model are things like the suspension isn't tight enough or the speedometer lighting is a funny color; you just don't see "this car doesn't go from A to B reliably." We simply take for granted that the basic functioning of the car will always be flawless and available. This is not the case for Tesla Motors products.

First, a bit about batteries. Tesla made the decision to power their vehicles with off-the-shelf Li-ion batteries: six thousand eight hundred thirty-one of them, to be exact. The most critical performance metrics for automotive batteries are energy density (how much energy a battery can hold per unit mass or volume), power density (how much power a battery can produce per unit mass or volume), and cycle life (how many times a battery can be charged and discharged before decaying to 80% of original capacity). Energy density rules the car's range for a given battery pack size. Power density rules the car's potential performance for a given pack size, and cycle life (along with energy density) determines how far you can drive before you need to replace your battery pack, or as Tesla would say, your Energy Storage System (ESS).

Li-ion batteries are energy density champions. They can hold at least 5x the charge of lead-acid batteries for a given mass and about 2x the charge of Ni-MH (nickel metal hydride) batteries per unit mass. Critically, typical Li-ion batteries have a rather poor cycle life of about 500 cycles; more on that later. Li-ion cells are only average when it comes to power density, but power density itself is not really an issue: a large battery pack (like Tesla's 53 kilowatt-hour one) of almost any current chemistry can make more power than most suitable electric motors can handle. But there is a flip side to power density. How fast a battery can discharge its energy (power density) is usually directly proportional to how fast it can accept energy (get recharged), and how fast a battery can be recharged is another critical metric for electric vehicles. This is the Achilles' heel of pure electric cars right now (though there is a solution, discussed later). It is unacceptable to the average consumer to have to wait a minimum of three and a half hours while their car recharges... and this is a best-case scenario with a special 240V 70A charger. Charging from a normal 20-amp outlet will take more than a full day.
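The charging-time claims above can be sanity-checked with a little arithmetic. The 53 kWh pack size and the charger ratings are the figures from this post; the 90% charger efficiency and the constant-power charge curve are simplifying assumptions of mine (real chargers taper off near full).

```python
# Sanity check: hours to charge a 53 kWh pack from various supplies.

PACK_KWH = 53.0  # Roadster pack size as quoted in the post

def charge_hours(volts, amps, efficiency=0.9):
    """Hours to fill the pack from empty, assuming constant power and an
    assumed 90% charger efficiency."""
    watts = volts * amps * efficiency
    return PACK_KWH * 1000 / watts

print(f"240 V / 70 A dedicated charger: {charge_hours(240, 70):.1f} h")
# A 20 A household circuit is limited to 80% continuous load, i.e. 16 A.
print(f"120 V / 16 A household outlet:  {charge_hours(120, 16):.1f} h")
```

The dedicated charger comes out right at the three-and-a-half-hour figure, and the household outlet lands around thirty hours, consistent with "more than a full day."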

The unacceptable time penalty should such a car run out of juice creates an anxiety in the driver that pundits have dubbed "range anxiety." In a great example of an action that is legal but not ethical, General Motors has just applied to trademark the term "range anxiety" (note: hey General Motors, you didn't invent that term, grow up), and Tesla's response was less than honest as well. Here it is:

"By all means, GM can have 'range anxiety.' To Roadster owners, the term is as irrelevant as 'gas stop' or 'smog check.'"

But the term is not irrelevant to Roadster owners. If you were to poll (Tesla) Roadster owners on whether they would prefer a car that recharges in three minutes or one that recharges in three and a half hours, I think everyone knows what the outcome would be. I sincerely doubt that a significant percentage of consumers are willing to accept that their expensive new car will be out of commission for at least three and a half hours after a long drive and that cross-country road trips are now impractical. Our expectations are already set. Such inconvenience is not acceptable.

Back to batteries. Trying to make the perfect battery is like trying to balance the Rocket Equation (something Elon Musk has experience with). Increasing energy density will decrease cycle life and/or power density. Decreasing charging time will decrease energy density and/or cycle life. Batteries are sensitive to heat and cold, they are sensitive to high discharge rates, they self-discharge slowly over time and cannot be stored indefinitely, and deeply discharging them or charging them to 100% also lowers cycle life. To combat this, Tesla has developed a sophisticated liquid cooling system that keeps the batteries at the right temperature but itself draws energy from the pack. By using a large battery pack they also do not need to worry about high discharge rates damaging cycle life, and they have prevented the cells from charging above 95% capacity, which will benefit cycle life. But they still cannot escape the limitations of the Li-ion chemistry they've chosen. Tesla claims a range of 244 miles and a total battery life of 100,000 miles. This is not bad, but it is worse than a gas engine, and it tells us that Tesla estimates the cycle life of their batteries at around 500 cycles, right in line with what the average Li-ion cell gets. Owners will likely be less than enthused to learn that a new battery pack will (almost certainly) cost more than $40,000. But Tesla does have other cell options.

The choice of standard (cobalt cathode) Li-ion cells looks shortsighted to me. Alternative Li-ion chemistries such as the iron phosphate of A123 Systems or the titanate spinel of Altairnano would have been wiser long-term decisions: while they have ~35% lower energy density (or worse for Altair), they have 10x the cycle life (Altairnano claims 30x), meaning battery replacement could have been every ~650,000 miles instead of every 100,000, and (critically) they can both be fast-charged in less than 20 minutes. Fast-charging such a huge battery pack, however, brings problems of its own. To charge a 53 kWh pack in 20 minutes would require more than 660 amps at 240V. That kind of power is simply not available at the average residence or business and has serious hazards associated with it. For this reason I think gas stations will continue to serve a role as fast-charge stations even if electric cars completely conquer the consumer automotive market (they will not conquer the semi-trailer market for a very long time, if ever). All this to say that by building standard Li-ion battery packs into their cars, Tesla is building bad publicity into their future. Owners will not be happy with a $40,000 battery replacement bill when there are alternatives. And there will be alternatives.
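Here is the arithmetic behind that 660-amp figure, ignoring charger losses (which would only push the number higher):

```python
# Fast-charging a 53 kWh pack is a power problem, not just an energy problem.

PACK_KWH = 53.0  # pack size as quoted in the post

def required_amps(minutes, volts):
    """Current needed to deliver the full pack in `minutes` at `volts`,
    with charger losses ignored (real losses raise the number)."""
    power_w = PACK_KWH * 1000 / (minutes / 60)
    return power_w / volts

amps = required_amps(20, 240)
print(f"20-minute charge at 240 V: {amps:.0f} A ({amps * 240 / 1000:.0f} kW)")
```

A 20-minute charge works out to roughly 160 kW of continuous power, far beyond a residential service, which is why dedicated fast-charge stations make sense.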

The best alternative is the serial hybrid power-train architecture. I'd bet money that this will be the power-train of choice until batteries have roughly doubled in performance and can provide gas-like range with sub-twenty-minute charge times at a reasonable price. The serial hybrid is the architecture chosen for the Chevy Volt, and it brings the efficiency advantages of the pure electric without losing the convenience of gas. Unlike the Prius, which is a parallel hybrid, the Volt carries an engine that does not directly power the wheels (update 10/11/2011: GM just revealed that it actually can in some circumstances). The engine only kicks in to power a generator or alternator (I'm not sure which it is) that in turn can directly power the electric motors and charge the battery pack. Converting power to electricity before sending it to turn the wheels costs ~10% in efficiency, but that is made up for by the engine being able to run at its most efficient rpm and manifold pressure at all times. The battery is big enough to power the car unassisted for ~40 miles, and when it is depleted the car simply switches to the gas engine. Most people drive less than 40 miles a day, so most people will still use zero gasoline, yet they retain freedom from range anxiety and don't have to bear the cost of an enormous battery pack. Because less total energy is required, it is easier for the manufacturer to choose a chemistry that has greater cycle life and is more tolerant of heat and cold. Essentially the serial hybrid brings 90% of the pure electric's advantages while retaining almost none of its disadvantages. This is the route GM has taken, this is the route my least favorite start-up, Fisker Automotive, has taken, and I expect all other major manufacturers to announce serial hybrid designs within a year or two.
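A toy comparison of the serial-hybrid trade-off described above. Every efficiency number here is an illustrative assumption of mine, not a measured figure for any real vehicle:

```python
# Toy model: conversion loss vs. keeping the engine at its sweet spot.

def serial_hybrid_eff(engine_peak=0.35, electrical_path=0.90):
    """Serial hybrid: the engine always runs at its most efficient operating
    point, but ~10% is lost converting mechanical power to electricity and
    back (the post's figure). Both numbers are assumed."""
    return engine_peak * electrical_path

def direct_drive_eff(engine_avg=0.25):
    """Direct drive: no conversion loss, but the engine spends much of its
    time away from its optimal rpm/load point (assumed average)."""
    return engine_avg

print(f"serial hybrid (engine at sweet spot): {serial_hybrid_eff():.1%}")
print(f"direct drive (engine off-peak):       {direct_drive_eff():.1%}")
```

With these assumed numbers, the serial layout nets out ahead despite the conversion loss, which is the argument for the Volt's architecture.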
Tesla has secured massive funding from the DOE and elsewhere but at some point they're going to have to turn a profit. At some point they're going to have to achieve significant market share. To do this, they need a serial hybrid or a breakthrough in battery tech. I wouldn't hold my breath for the latter.


Edit 4/22/2013: I have changed my opinion on this issue; I now believe that Tesla will succeed brilliantly. I have been long TSLA since early March and have advised close friends to do likewise. My speculation regarding battery technology in this post was incorrect; unsurprisingly, Elon knew what he was doing.

Thursday, September 2, 2010

Hey Epicurus!

The problem of pain is often used by laymen as an instrument to attack God's goodness, His existence, or both and it used to be considered by most philosophers and theologians to be the strongest such attack. For whatever reason this question always adopts a narrow perspective which leads to a false conclusion. Drawing conclusions from narrow perspectives is like zooming in on one of the few downhill sections on the trail up Mt. Whitney, and concluding that it is a downhill stroll to the top of the mountain. 


To come to the correct conclusion regarding the nature of the trail up the mountain you have to look at the entire trail. Similarly, to come to the correct conclusion regarding the experience of pain in life you have to look at the entire life and ask yourself the question: as a human being, is being better than not being? All people who call themselves rational and have not taken their own life must answer in the affirmative. And if you answer in the affirmative, then the goodness of our life and existence outweighs the badness, and God is probably good. The bad we experience is a necessary consequence of our free will, which enables both good and bad and is itself a superior good.


It is still very interesting that God would make us with free will. Why not make us God-glorifying automatons? Well...  maybe he was bored of making those☺ as that is how He made the vast majority of the universe. The heavens declare the glory of God and they have absolutely no choice in the matter.

Monday, August 30, 2010

Why not down?

Up is always considered better than down. Thumbs up is good, thumbs down is bad; heaven is up, hell is down; glory is up, shame is down; success is up, failure is down; high flying is better than low crawling. Why is up always better than down? And why not settle for the status quo, or even exalt down?
Implicit in this universal sentiment regarding up and down is an affirmation of the value of effort and a nod in the direction of hard work. After all, down is the natural way to go; down is the easy way to go. It's interesting that up is also exalted as better than neutrality. This motif, that exerting effort beyond what is needed to simply exist is a good thing, speaks to a calling present in the heart of man that is higher than the calling to simply exist. The exaltation heaped on what is essentially defying the second law of thermodynamics is an affirmation that we were designed to transcend the natural laws, to identify and manipulate them, to understand and harness them, and to harness them for a purpose that transcends them.

Thursday, August 5, 2010

RIM wields a Torch in a world of Flashlights

So Research In Motion, Canadian maker of the BlackBerry, launched their new flagship device today and called it the Torch. The Torch is the biggest letdown since the last time RIM launched a flagship device, and it confirms the opinion I've held for about a year: RIM's days of ~40% smartphone market share will never return once they leave, and they will leave soon.

If RIM keeps trying to make their old formula of a portrait hardware keyboard and a crappy screen work, they'll be insolvent before the end of the decade, though I understand their hesitancy to drop a formula that worked well for so long. The truly inexplicable part of the Torch's failure is that while it debuts the much-ballyhooed BlackBerry OS 6.0, it uses the same SoC as the BlackBerry 9700 and a three-years-behind-the-times sub-VGA screen. I'm not sure how RIM's management concluded that they should equip their "flagship" device with a micro-architecture from 2002 (no really, that's how old ARM11 is), fabricated on process technology from 2006 (though smartphones in general are all a node behind here), and driving a screen with roughly a quarter of the pixels of the iPhone 4 and well under half the pixels of most Android handsets. RIM's new WebKit browser is almost as much better than RIM's old browser as it is worse than Android's latest: it's usable, unlike RIM's old browser, but about half as fast as the latest iPhone's and less than half as fast as the Droid 2's. Email is apparently flawless, as it always has been on BlackBerry, but this doesn't really matter when everyone has flawless email. A large increase in advertising has accompanied the Torch's launch, though the advertising is for BBM rather than the Torch itself, perhaps a tacit acknowledgement of the device's shortcomings.

Message to RIM: We want better phones not more advertising!

Thursday, July 15, 2010

Why Kin Failed and the new Smartphone Paradigm

What's a Kin??
Microsoft launched the Kin phone on April 12th this year with two models imaginatively dubbed "Kin One" and "Kin Two." Six weeks later, the project is dead.

Estimated to have cost Microsoft north of $1 billion to develop, the Kin is the most spectacular product failure the tech world has seen in a long time. While Android handsets are flying off the shelves at a rate greater than 160,000 units per day (update: as of July 20th that number is now greater than 200,000 per day), Microsoft's Kin has sold fewer than 8,000 handsets in the six weeks since launch. How did Microsoft manage to fail so successfully? Read on to find out.

How to Fail: Part 1
The Kin is the result of Microsoft's "Project Pink," a phone initiative that grew out of Microsoft's purchase of Danger Incorporated. Danger was a 2000 Palo Alto start-up that got an early foot in the mobile cloud-computing door and produced the popular Sidekick line of somewhat-smart... phones. When Microsoft purchased Danger in 2008, development continued as a new Microsoft product under Microsoft executive J Allard. The story gets a little fuzzy here, but Engadget reports that sometime between Allard being appointed Project Pink lead and the launch of the Kin, Allard was forced out of the project, which was taken over by Windows Phone 7 lead and Microsoft senior VP Andy Lees, who also continued to lead WP7 development. Windows Phone 7 was far more important to Microsoft than Project Pink, and the rumor is that Lees did not think Microsoft needed two completely separate phone initiatives and did not really care where Project Pink went. How to fail? Fire the visionary and replace him with someone who has more important things to do.

How to Fail: Part 2
Reviews of the Kin were oddly bipolar. I don't mean that some reviews were very good and some were very bad; I mean that all reviews absolutely loved some aspects of the Kin and absolutely hated others. No really, ALL of the reviews were like that; go read one. With a name like Microsoft behind them, the Kin are touchscreen phones that at first blush appear to promise competent implementations of all the features you'd expect from a high-caliber smartphone. They don't offer all those features, however, and Microsoft never intended them to; they were intended to be the Sidekick's spiritual successor and target the frequently texting tween/teen demographic. Unfortunately for Microsoft, the targeted demographic moved on while Project Pink was in development and, like every other demographic on the planet, now wants a phone that is great at texting AND great at lots of other things, à la Android or iPhone. There is no point in walking around with a Kin when the same monthly payments will get you a Droid; it was absolute suicide for Microsoft and Verizon to require a full data plan for the Kin. How to fail? Target a 2008 demographic in the year 2010.

The New Smartphone Paradigm. Why Kin never had a chance.
The iPhone launched in 2007 to huge fanfare and even huger success. I have to hand it to Apple for managing to coax a smooth GUI out of the abysmally slow 400 MHz ARM11 processor the original iPhone launched with; it was accomplished by using the GPU to power 2D transitions instead of confining it to its traditional role of 3D rendering work. The next year Google launched the G1, a phone that, while hugely capable, was hampered by heinously ugly hardware and horribly choppy software. It captured a profitable but insignificant 1% of the smartphone market as the iPhone climbed through 15% market share. The iPhone continued to dominate and single-handedly turned around the fortunes of AT&T and their crappy GSM network. AT&T achieved higher growth, higher average profit per customer, and a higher percentage of smartphone users than the other three major wireless carriers, all on the back of the iPhone. While Apple's hugely effective marketing definitely helped sales, the iPhone gave people something they desperately wanted: the internet, everywhere and always. When it first launched, the iPhone was in its own more expensive pricing tier, but that has changed. All four major wireless carriers now have identical monthly charges for all their smartphones (though Sprint charges $10 extra for 4G-capable models). When the total cost of a two-year contract runs you $2,000-3,000, it's a pretty easy decision to fork over another $50-$200 at the start of the contract for a phone that does almost everything your laptop can do, plus a lot of things it can't. When the price premium to get the absolute best on the planet is less than 10%, most people will pay it, especially when they will spend more time interacting with the device in question than with any other device. Contrast this with planes, trains, automobiles, houses, computers and pretty much everything else, where the best costs thousands of percent extra. This new paradigm polarizes sales.
The best win by even more, and everyone else loses horribly. This is why Verizon ran out of Droid Xs (after claiming they wouldn't run out, no less), Droid Incredibles and Droids. This is why Sprint ran out of EVOs, AT&T ran out of iPhones, and T-Mobile ran out of nothing, because they inexplicably have failed to provide a high-end device to their customers. Which brings me back to the Kin. The Kin was powered by an anemic ARM11 CPU of still-undisclosed clock speed, with a crappy low-res screen and a pretty but choppy user interface. The all-important web browser was described by AnandTech as "abysmally slow"; while usable, it was less than half as fast as other internet-everywhere smartphones.

It would have been cutting edge in 2007, but in 2010 it's just an embarrassment. Microsoft was hoping the Kin would be seen as the smartest dumb phone, but it was clearly the dumbest smartphone. Particularly damning was its being priced alongside far more capable smartphones. Who in their right mind would pick up a Kin when the Droid Incredible, with its 1 GHz Cortex-A8 CPU, high-res OLED screen and Android goodness, is the same monthly price?

Uninspired leadership, targeting non-existent demographics, and bringing a knife to a gunfight spelled doom for the Kin. Windows Phone 7 is looking promising though and the Kin is only strike two for Microsoft. We'll see if Windows Phone 7 can play in the big leagues this winter.