Wednesday, September 25, 2013

Pervasive NSA surveillance + civil forfeiture = U.S.-flavored totalitarianism?

When Edward Snowden's revelations about pervasive NSA surveillance first came to light, I thought the worst thing that could happen would be for people to be faux-outraged for a little while and then to turn their attention to the next big news story. Has that already happened? Sometimes I think so, sometimes I don't.

What I'm sure of is that, even as the revelations keep coming, I keep hearing from smart, educated, responsible, thoughtful people -- friends, family, and acquaintances -- that pervasive surveillance is old news; not a meaningful invasion of privacy; and/or a 'necessary compromise' to keep evildoers in check.

It's not a gimme to push back against these arguments. It's complicated. There are multiple aspects of what's worrisome about a pervasive surveillance state, some of which are related in non-obvious ways; and even the most avid newshound is hobbled by the simple truth that civilians, and even most experts, are working from incomplete information. The reasons we ought to take pervasive surveillance seriously are complex: some of the complexity is technical, some social or political.

I think that several months after Glenn Greenwald first broke the Snowden leak story in The Guardian, it makes sense to examine aspects of the most consequential issues his leak raised, through some of the best journalism that has emerged since. By "best" I mean "most clarifying" or "most illustrative"; there's some wildly speculative and hyperbolic muck out there ... and while I recognize that not everyone will award golden "most clarifying" stars to the same pieces I do ... well, that's why there's a comments section.

I'm not going to try to examine all the important aspects of pervasive NSA surveillance. I'm not that smart. And there's no room for that thorough an examination in a single essay, not even this ridiculously long one!

Here are key points I'll touch on in this post:
  1. The data that's being gathered reveals an enormous amount about an individual's activity in social, economic, and political spheres.
  2. Surveillance data being harvested today places ordinary people at risk of persecution by the present and any future government.
  3. Nefarious use of surveillance data could easily look like current "civil forfeiture" practice applied to ordinary people.
  4. The strategies used by the NSA to enable pervasive surveillance may have already undermined the trust and security of the internet itself, on which enormous sectors of economic, political, and social activity depend.

Metadata mining is much more invasive than airport body scans

Days after the Snowden leak story broke, I posted Not your granddaddy's metadata: don't believe the PRISM anti-hype, in which I pointed to expert opinions and studies indicating how much can be learned about a person's activities from a very little bit of metadata. Since then, this topic has been treated extensively in many public forums, so it would be silly to belabor the point.

However, a very clever analysis just came to my attention a week ago (thanks to B-- and S-- of Madison, WI -- which is probably enough metadata for the NSA to figure out to whom I'm referring, if they care).

The analysis is worth sharing.

In Using Metadata to Find Paul Revere, Kieran Healy, Associate Professor of Sociology at Duke University, details in farcical form how social network analysis (SNA) -- an analytical technique applicable to social media and similar metadata to discover roles and relationships in any given group of people -- might have been used by the British in the 1770s to unmask (and perhaps nip in the bud) Paul Revere's catalytic role in the American Revolution ... if the Redcoats had actually known how to perform SNA.

The gist is this: applying social network analysis techniques to eighteenth-century data about memberships in seven Boston-area organizations -- covering a mere 260 persons in toto -- surfaces Revere's importance as a central, brokering, key individual in the mobilization that led to the revolution that freed the United States from British subjugation. Armed with information surfaced by SNA, a British special ops team (had one existed at that time) might have set out to garrote Paul Revere in order to disrupt, and perhaps incapacitate, revolutionary activity in Boston.
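The arithmetic at the heart of this kind of analysis is surprisingly simple. Given a person-by-organization membership table, shared memberships between every pair of people fall out of a single matrix-style computation, and even a crude tally of those shared ties starts to reveal who the connectors are. Here's a minimal sketch in Python -- the names and memberships below are hypothetical stand-ins for illustration, not Healy's actual dataset:

```python
# Hypothetical person-to-organization memberships (illustrative only).
memberships = {
    "Revere": {"StAndrewsLodge", "NorthCaucus", "LondonEnemies"},
    "Warren": {"NorthCaucus", "LondonEnemies"},
    "Adams":  {"NorthCaucus", "LongRoomClub"},
    "Church": {"StAndrewsLodge", "LongRoomClub"},
}

# Co-membership ties: for each pair of people, how many organizations
# they share. (This is the person-by-person projection of the
# membership matrix -- M multiplied by its transpose.)
people = sorted(memberships)
ties = {
    (a, b): len(memberships[a] & memberships[b])
    for i, a in enumerate(people)
    for b in people[i + 1:]
}

# A crude centrality score: total shared memberships with everyone else.
centrality = {
    p: sum(n for pair, n in ties.items() if p in pair) for p in people
}

print(max(centrality, key=centrality.get))  # → Revere
```

With only four people and four organizations the "key player" pops right out; Healy's point is that the same computation scales to hundreds of people and many kinds of ties, with no need to know what anyone actually said.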

Here's how Prof. Healy puts it in a faux-18th-century voice:
So, there you have it. From a table of membership in different groups we have gotten a picture of a kind of social network between individuals, a sense of the degree of connection between organizations, and some strong hints of who the key players are in this world. And all this—all of it!—from the merest sliver of metadata about a single modality of relationship between people. I do not wish to overstep the remit of my memorandum but I must ask you to imagine what might be possible if we were but able to collect information on very many more people, and also synthesize information from different kinds of ties between people! For the simple methods I have described are quite generalizable in these ways, and their capability only becomes more apparent as the size and scope of the information they are given increases. We would not need to know what was being whispered between individuals, only that they were connected in various ways. The analytical engine would do the rest! I daresay the shape of the real structure of social relations would emerge from our calculations gradually, first in outline only, but eventually with ever-increasing clarity and, at last, in beautiful detail—like a great, silent ship coming out of the gray New England fog.
But perhaps that's too whimsical or allegorical an approach for flinty-minded readers.

In that case, I recommend an academic paper (to which Prof. Healy links in an afternote to his piece) by Shin-Kap Han, an Associate Professor of Sociology at University of Illinois at Urbana-Champaign: The Other Ride of Paul Revere: The Brokerage Role in the Making of the American Revolution (PDF). This dense, 20-page treatment, with tables, graphs, 19 footnotes, and dozens of cited references, was published in June 2009 in Mobilization, "a review of research about social and political movements, strikes, riots, protests, insurgencies, revolutions, and other forms of contentious politics" run out of San Diego State University. The review's purpose "is to advance the systematic, scholarly, and scientific study of these phenomena, and to provide a forum for the discussion of methodologies, theories, and conceptual approaches across the disciplines of sociology, political science, social psychology, and anthropology."

Prof. Han's article builds on membership data about five organizations to which Paul Revere and 136 others belonged (a subset of data Prof. Healy used). His paper describes through detailed illustration and analysis of this data how SNA is applicable to real-world activities, and how a seemingly small quantity of metadata can reveal a very great deal indeed.

What government surveillance means to you, today and in the future

In the case of Paul Revere, the revelations provided by sparse metadata come long after the fact of his political activity. But the same methods apply to individuals alive today (and tomorrow), and an enormously greater body of metadata concerning today's activities is available to those -- like the NSA -- who collect it.

For a summary of what is known about what the NSA is collecting, I'd recommend the Electronic Frontier Foundation's How the NSA's Domestic Spying Program Works and the ACLU's A Guide to What We Now Know About the NSA's Dragnet Searches of Your Communications (the latter report is dated 9 Aug 2013). My summary of key elements of this week's bottom line includes:
  • metadata about telephone communication (names, addresses, detailed records of calls) is being vacuumed up by the NSA;
  • the NSA has real-time surveillance access to just about everything that a typical person does on the internet, and search tools that make it possible to zero in on any name, e-mail address, or IP (computer network) address, etc. that an analyst wishes to examine ("without prior authorization"), whether that data/activity originated in the United States or elsewhere;
  • the NSA is building a huge ($2Bn) data facility in Utah to store the data it has been collecting over the past decade or so and into the future.
What this means to your average person may be best summarized by Edward Snowden himself, in a widely viewed video interview published by the Guardian on 9 June 2013. The following is my own transcription of what Snowden said beginning at 7'12" into the interview:
... even if you're not doing anything wrong you're being watched and recorded, and the storage capability of these systems increases every year, consistently, by orders of magnitude to where it's getting to the point you don't have to have done anything wrong, you simply have to eventually fall under suspicion from somebody, even by a wrong call, and then they can use this system to go back in time and scrutinize every decision you've ever made, every friend you've ever discussed something with, and attack you on that basis, to sort of derive suspicion from an innocent life, and paint anyone in the context of a wrongdoer.
Will the grim picture Snowden paints necessarily happen?

Well, no. If the people who hold the power to "derive suspicion from an innocent life, and paint anyone in the context of a wrongdoer" decide not to exercise their power in that way, then it won't happen.

But -- even if you trust the current U.S. government to do the right thing today -- you need to ask yourself whether you similarly trust next year's or next decade's government (details, persons, and policies TBD) to take a similarly trustworthy approach.

As they say in the investment world, past performance does not predict future returns.

If pervasive surveillance data is collected, and stored, and accessible to analysts, then whatever agency or agencies have the data and tools also have the means to "derive suspicion from an innocent life, and paint anyone in the context of a wrongdoer." Any agency or agencies who have the data and tools. Not just the ones whose politics and policies one might like.

Whether this is worrisome enough to do something about it is a political and sociological call that each of us as individuals and citizens, and we collectively as a nation and society, need to make.

Imagining nefarious use of surveillance data: consider civil forfeiture

It's hard for many people to imagine the path from the United States they inhabit to a nation with a Soviet-scale gulag or to the world depicted in Neill Blomkamp's dystopian thriller, Elysium. It's therefore useful, I think, to consider repression at less dramatic scale. Doing so helps one put police-state-creep into real world perspective.

An article titled Taken, by Sarah Stillman in The New Yorker of 12 August 2013, takes a hard look at certain current practices of some state, county, and city law-enforcement agencies. These practices fall under the general category of "civil forfeiture."

What is "civil forfeiture"? In a nutshell, quoting from the sub-title of Stillman's article:
Under civil forfeiture, Americans who haven’t been charged with wrongdoing can be stripped of their cash, cars, and even homes.
In Taken, Stillman describes the experience of American citizens and residents whose property was seized under circumstances that are functionally indistinguishable from being forced to pay authorities a bribe to be released from a police investigation and/or a threatened prosecution. But not an illegal bribe. Civil forfeiture sufficiently conforms to the letter of the law that it's difficult or impossible to fight for many individuals whose legal property is taken from them by agents of law enforcement.

The examples Stillman gives in her article take place in Texas, Oklahoma, Georgia, Arizona, Washington, D.C., Pennsylvania, Virginia, et al. In other words: all over the country.

Here's how civil forfeiture works in greater detail, again from Stillman's article:
The basic principle behind asset forfeiture is appealing. It enables authorities to confiscate cash or property obtained through illicit means, and, in many states, funnel the proceeds directly into the fight against crime. In Tulsa, Oklahoma, cops drive a Cadillac Escalade stencilled with the words “this used to be a drug dealer’s car, now it’s ours!” In Monroe, North Carolina, police recently proposed using forty-four thousand dollars in confiscated drug money to buy a surveillance drone, which might be deployed to catch fleeing suspects, conduct rescue missions, and, perhaps, seize more drug money. Hundreds of state and federal laws authorize forfeiture for cockfighting, drag racing, basement gambling, endangered-fish poaching, securities fraud, and countless other misdeeds.

In general, you needn’t be found guilty to have your assets claimed by law enforcement; in some states, suspicion on a par with “probable cause” is sufficient. Nor must you be charged with a crime, or even be accused of one. Unlike criminal forfeiture, which requires that a person be convicted of an offense before his or her property is confiscated, civil forfeiture amounts to a lawsuit filed directly against a possession, regardless of its owner’s guilt or innocence.
The pattern of the many examples Stillman cites leads the reader to conclude that in some jurisdictions, civil forfeiture is practiced in order to fund law enforcement budgets:
[...] civil-forfeiture statutes continued to proliferate, and at the state and local level controls have often been lax. Many states, facing fiscal crises, have expanded the reach of their forfeiture statutes, and made it easier for law enforcement to use the revenue however they see fit. In some Texas counties, nearly forty per cent of police budgets comes from forfeiture. (Only one state, North Carolina, bans the practice, requiring a criminal conviction before a person’s property can be seized.) Often, it’s hard for people to fight back. They are too poor; their immigration status is in question; they just can’t sustain the logistical burden of taking on unyielding bureaucracies.

Take a deep breath (especially if you followed the link and read Stillman's descriptions of the devastation to real people's lives caused by civil forfeiture practices). And, with a clear mind, consider local incentives to inflict civil forfeiture proceedings on helpless individuals against Snowden's description of what pervasive surveillance enables.

Quoting again from Snowden's June 9th Guardian interview, with ellipses to get us right to the heart of the matter:
... even if you're not doing anything wrong you're being watched and recorded [...] it's getting to the point you don't have to have done anything wrong, you simply have to eventually fall under suspicion from somebody, even by a wrong call, and then they can use this system to [...] derive suspicion from an innocent life, and paint anyone in the context of a wrongdoer.
The heart of the matter, of course, is that it doesn't even have to be criminal or political. You don't have to be regarded by powerful authorities as a political 'problem' or a 'terrorist' to have your life ruined when your activities are recorded and maintained by government spies.

You might not even fall under actual suspicion. Maybe you just look like a juicy target.

What civil forfeiture in these United States tells us is that pervasive surveillance of the sort the NSA practices enables subjugation of average, innocent civilians by authorities who are motivated by ... budget cuts. Or call it greed. Or call it lust for power. You know, the kind of crooked timber that human beings are built from.

Are you worried yet?

The other cost of NSA surveillance 'techniques': destruction of the internet?

If the risk to individuals doesn't worry you, how 'bout the news that the NSA has been secretly undermining technology that enables trust between merchants and customers, and between participants in social media activity that powers huge sectors of the 21st century's economy, political dialog, and social activity? By "trust" I mean the secure knowledge that things I willingly tell or give to a business or person won't be pirated by a malicious actor who will then do me harm.

So-called "security guru" Bruce Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School and a board member of the Electronic Frontier Foundation. In a post on his blog, Schneier on Security, dated 5 September 2013, titled The NSA Is Breaking Most Encryption on the Internet ... well, the title pretty much says it all.

What does that title mean? It means that the secure connection that you use when you give your credit card information to a vendor, like Amazon or PayPal, is not actually secure. Surprise!

It means that the intricate, clever password you use to protect your on-line bank account or your 401(k) can't possibly be intricate or clever enough, because the secure connection you use when you type it in is permeable to bad guys. Whee!

See, it's not a matter of only the NSA being able to sniff out your credit card info. That would be creepy, yes; and in a civil forfeiture context, in which not just the NSA but the local sheriff might be able to sniff it out too -- that might be really creepy ... and materially risky as well.

The problem is that the NSA, we've now learned, has made it possible to break the encryption that protects your commercial transactions by subverting the standards on which most encryption technology is built. The encryption technology that everyone uses is weak because the NSA secretly gamed the system so the agency could play Peeping Tom ... with the unavoidable and completely foreseeable side effect that other, unknown, clever bad guys can exploit the same weaknesses.

Oh, I'm not saying I could do it myself (I'm not a clever enough geek, and I'm not a bad guy ... really, I'm not!). I'm not even saying the whole department of programmers with whom I work at UC Berkeley could do it. But, oh, how about an army of cryptographers hired by organized crime syndicates (pick your favorite here, I won't risk naming any...)? Or how about a literal army of cryptographers run by a national government?

If you're up for a lot of tech talk, you can get the geeky details of the NSA's insanely reckless subversion of internet security in On the NSA. This is a 5 Sept 2013 post on the blog A Few Thoughts on Cryptographic Engineering, written by Matthew Green, cryptographer and research professor at Johns Hopkins University (it's the post that launched a kerfuffle in which his academic dean first demanded that Green remove the post from the internet, then abjectly apologized for making that demand).

An alternative to this thickly techie post would be to read the news stories to which Prof. Green refers in the excerpt included below, summarizing those articles' revelations (TL;DR, for those unfamiliar with the meme, means "too long; didn't read"):
If you haven't read the ProPublica/NYT or Guardian stories, you probably should. The TL;DR is that the NSA has been doing some very bad things. At a combined cost of $250 million per year, they include:
  1. Tampering with national standards (NIST is specifically mentioned) to promote weak, or otherwise vulnerable cryptography.
  2. Influencing standards committees to weaken protocols.
  3. Working with hardware and software vendors to weaken encryption and random number generators.
  4. Attacking the encryption used by 'the next generation of 4G phones'.
  5. Obtaining cleartext access to 'a major internet peer-to-peer voice and text communications system' (Skype?)
  6. Identifying and cracking vulnerable keys.
  7. Establishing a Human Intelligence division to infiltrate the global telecommunications industry.
  8. And worst of all (to me): somehow decrypting SSL connections.
Back to Harvard's Bruce Schneier, in an article published by the Guardian on the same date (things were pretty busy on 5 Sept). Schneier, who reviewed many of the leaked documents himself, responds to the NSA's stunning betrayal by calling his fellow eggheads to arms:
By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.

[...] I have resisted saying this up to now, and I am saddened to say it, but the US has proved to be an unethical steward of the internet. The UK is no better. The NSA's actions are legitimizing the internet abuses by China, Russia, Iran and others. We need to figure out new means of internet governance, ones that makes it harder for powerful tech countries to monitor everything. For example, we need to demand transparency, oversight, and accountability from our governments and corporations.
If you're not worried yet, I don't know what more I can type.

And so....

What we've got is a military/industrial/security complex that is running off its rails. Just like President Eisenhower warned about half a century ago. It's putting individuals -- any and all individuals -- at perilous risk, and it's corroding key foundational elements of 21st century economic, political, and social life.

As Snowden said in June (video, 11'59" - 12'34") -- remarks for which he was unjustly ridiculed when he was just telling the plain truth -- we are perilously close to a situation in which:
...a new leader will be elected, they'll flip the switch, say that because of the crisis, because of the dangers that we face in the world, you know, some new and unpredicted threat, we need more authority, we need more power, and there will be nothing that people can do at that point to oppose it, and it'll be turnkey tyranny.
Turnkey tyranny.

Yup. We should worry about that.

This piece is cross-posted at Daily Kos.

Related posts on One Finger Typing:
Not your granddaddy's metadata: don't believe the PRISM anti-hype
Pimped by our own devices: electronica, the cloud, and privacy piracy
Unvarnished truth is hard to swallow

Thanks to Wikimedia Commons for the scary postcard image from turn-of-20th-Century Germany.

Thursday, September 12, 2013

A bicycle's trigger shifters collide with user-centered design

I work in information technology for a living, and I just bought a new bike. A hybrid. I mostly ride to get around town, mainly to commute to work and my gym.

I loved my old bike, vintage 1984, a much more rugged conveyance (heavy, with fat and knobby tires) that came to me as a hand-me-down (hand-me-up?) from my younger brother. I'd been riding it for more than twenty years. I replaced it only because the frame broke in two places, where the right-side chain and seat stays meet (met) the rear axle ... it must have been metal fatigue, 'cuz I rode the thing like a little old lady.

I've had my new bike less than a week, but I'm already smitten (plug: Mike's Bikes!). I could tell you all about it, but this very geeky post is going to focus on shifting gears. Bike gears, I mean. If you're not a bike geek or an IT geek -- or, minimally, some kind of geek -- I can't guarantee you're going to fathom why I'm spewing so many words on so focused a topic.

So. Shifting gears.

My new bike has index shifters, which is pretty much standard equipment these days. On my bike, the indexed shifters are operated by triggers -- little levers mounted on the handlebars -- that a rider pushes or pulls, hence they're called "trigger shifters."

The old-style friction shifters required a rider to move a lever just the right amount to cause the derailleur to align with the next sprocket up or down the bike's front crankset or rear cassette. Friction shifters often required a bit of post-shift adjustment to get the derailleur (and the chain whose position it governs) into just-right alignment with the desired sprocket; otherwise the drive train would rattle and chatter, and the chain might even slip, because it wasn't properly engaged. You might think of this as analog shifting.

The nifty, new index shifters are a good conceptual fit to this digital age. Index shifters are calibrated to pull in or let out a small, fixed length of cable each time the shifter is activated. These fixed lengths of cable move the derailleur over just the right distance, into just-right alignment every time. No further fiddling is necessary (or even possible, except by adjusting the cable tension: a tune-up procedure, not something a rider does while on his or her merry way).
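The analog/digital distinction can be sketched as a toy model: indexed shifting snaps to discrete, valid sprocket positions, while friction shifting leaves the derailleur wherever the lever happens to stop. All numbers below (sprocket count, cable pull per click) are made-up illustrative values, not actual Shimano specifications:

```python
SPROCKETS = 8                # sprockets on a hypothetical rear cassette
CABLE_PULL_PER_CLICK = 2.8   # mm of cable per click (made-up value)

def index_shift(current_sprocket: int, clicks: int) -> int:
    """Indexed ('digital') shifting: each click moves exactly one
    sprocket, and the mechanism clamps you to a valid position."""
    return max(1, min(SPROCKETS, current_sprocket + clicks))

def friction_shift(cable_mm: float) -> float:
    """Friction ('analog') shifting: the derailleur lands wherever the
    lever leaves it, so the position is continuous and may sit between
    sprockets -- rattling -- until the rider trims it by hand."""
    return cable_mm / CABLE_PULL_PER_CLICK + 1

print(index_shift(3, 2))    # lands exactly on sprocket 5, every time
print(friction_shift(6.0))  # ~3.14: between sprockets 3 and 4, needs trimming
```

The clamping in `index_shift` is also why you can't over-shift past the end of the cassette: the ratchet simply has no more detents to give.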

The nifty, new index shifters that came with my bike are made by Shimano, which is the same high-quality manufacturer of my old bike's friction shifters. The model of the shifters on the new bike happens to be the Shimano EF-51; I believe that "EF" is an abbreviation for "EZ Fire."

So Shimano EF-51 trigger shifters have two triggers, which the rider pushes or pulls to shift ... that is, to move the derailleur, and thus cause the chain to move over to the next sprocket.

One trigger (activated by pushing from the rider-side of the shifter, generally with a thumb) moves a derailleur in one direction; the other, smaller trigger (activated by pulling from the forward side of the bike back toward the rider, generally with an index finger) moves a derailleur in the other direction. A push moves the derailleur so that the chain moves to the next larger sprocket. A pull moves the derailleur so that the chain moves to the next smaller sprocket.

If you're with me so far, you have a pretty good understanding of how bicycles work. Which would come in handy if you were to ride my new hybrid bike with its nifty, new index shifters, because, IMHO, the way they work is counterintuitive -- as we like to say in the world of "user-centered design" as applied to software, and especially to web site design. That world is where I make a living.

Counterintuitive? How so?

Here's a key quality of multi-geared bicycling that you want to keep in mind to understand what I mean:

On the crankset (the pedal end of the chain), using a bigger gear makes the bike harder to pedal than a smaller gear. On the rear cassette (rear wheel end of the chain), it's just the opposite: using a bigger gear makes the bike easier to pedal than a smaller gear.

[Another way of thinking about harder and easier in terms of bicycle mechanics: the rider has to exert more force (pedal harder) when the rear wheel rotates further for each rotation of the crankset (= rotation of the pedals); and the rear wheel rotates more times per crank rotation when a larger sprocket is engaged at the crankset, or a smaller sprocket is engaged at the rear wheel. My apologies if that's just not helpful at all...]
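For the arithmetically inclined, the asymmetry boils down to one ratio: front teeth divided by rear teeth gives wheel turns per pedal turn, so a bigger sprocket raises the ratio on the front but lowers it on the rear. A quick sketch, using typical hypothetical tooth counts rather than my actual bike's gearing:

```python
def wheel_turns_per_crank_turn(front_teeth: int, rear_teeth: int) -> float:
    """How many times the rear wheel turns for one pedal revolution.
    A higher number means a harder (but faster) gear."""
    return front_teeth / rear_teeth

# Bigger sprocket on the FRONT -> higher ratio -> harder to pedal:
print(wheel_turns_per_crank_turn(48, 17))  # 2.82...
print(wheel_turns_per_crank_turn(28, 17))  # 1.64... (smaller front = easier)

# Bigger sprocket on the REAR -> lower ratio -> easier to pedal:
print(wheel_turns_per_crank_turn(48, 28))  # 1.71... (larger rear = easier)
```

Same physical change (a larger sprocket), opposite effect depending on which end of the chain it's on -- which is exactly the asymmetry the trigger shifters expose to the rider.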

So what all the verbiage boils down to is this:

For a rider, pushing the near/inside trigger on the right side shifter, controlling which sprocket is engaged on the rear cassette, makes the bike easier to pedal. And performing the same action on the left side shifter -- pushing the trigger that controls which sprocket is engaged on the crankset -- makes the bike harder to pedal.

Same rider motion (push the trigger), opposite experience for the rider (easier vs. harder to pedal).

This is not good, user-centered design. It's counterintuitive. Assuming I don't know anything about how bikes work, I'll expect the same effect (harder or easier) if I push the trigger on one side or the other of the handlebars.

Yes, of course I can get used to the peculiar way the bike's shifters actually work, but if user-centered design principles were employed in making these devices I wouldn't have to.

That's my point.

Is design that rides roughshod (as it were) over a bicyclist's expected experience actually necessary due to mechanical constraints? Well, keep reading, we'll get to that.

First let's take a closer look at what "user-centered design" means. Here's Wikipedia's definition, with my emphasis added in bold:
In broad terms, user-centered design (UCD) is a type of user interface design and a process in which the needs, wants, and limitations of end users of a product are given extensive attention at each stage of the design process. User-centered design can be characterized as a multi-stage problem solving process that not only requires designers to analyse and foresee how users are likely to use a product, but also to test the validity of their assumptions with regard to user behaviour in real world tests with actual users. Such testing is necessary as it is often very difficult for the designers of a product to understand intuitively what a first-time user of their design experiences, and what each user's learning curve may look like.

The chief difference from other product design philosophies is that user-centered design tries to optimize the product around how users can, want, or need to use the product, rather than forcing the users to change their behavior to accommodate the product.
(For a definition tailored to web site user interfaces, you can check out the definition of usability on the Nielsen Norman Group's site; NNG is usability guru Jakob Nielsen's firm.)

So all this cerebration came to me on the very first day I rode my new, nifty bike to work, happily pushing and pulling shifter triggers willy-nilly to put the machine through its paces. Nice ride ... really, I enjoyed going to work. Imagine!

And yet: I wasn't the first person ever to notice the counterintuitive design of trigger shifters. The oddity I noticed has been discussed for years -- for example, in a May 2010 posting, excerpted below:
Jim, 05-21-10, 02:08 PM

This shifter is on my new 2010 Electra Townie 21D. I love it's simplicity, but get frustrated that the shifters function opposite of each other (ie. on the right shifter the upper lever is to upshift and the lower lever is to downshift - but on the left shifter the upper lever is to downshift and the lower lever is to upshift). I would have preferred the left to match the right. [...]

Wanderer, 05-21-10, 03:13 PM

Actually, they are both the same - clicking the shifters, in the same manner, moves the chain the same direction - toward the smaller sprocket, or toward the larger sprocket - in both instances....... start thinking in "sprockets" and all will be well.

Jim, 05-21-10, 03:33 PM

Yes, but using a larger sprocket on the rear is gearing down, and using a larger sprocket on the front is gearing up. Therein lies my conundrum.

mtnroadie, 05-23-10, 05:01 AM

It takes a little more force to go up to a larger sprocket. This is why the thumb lever is used on both the front and rear sprockets when shifting to a larger sprocket. Just ride it for a while. It will become second nature in no time.

Jim, 05-23-10, 04:51 PM

That's exactly what I assumed, after putting some thought into it. I spend more time going up and down the rear gears anyway. I'm sure I'll get used to it eventually. It's just kinda like when you get a new car and someone puts Reverse, down and to the right instead of, up and to the left where it has always been. Ugh.
To the question whether my shifters needed to be designed the way they are for mechanical reasons, "mtnroadie" claims that "It takes a little more force to go up to a larger sprocket. This is why the thumb lever is used on both the front and rear sprockets when shifting to a larger sprocket." Maybe. But I'm skeptical. Seems to me that's a problem that engineers have been solving with torque and appropriately lengthened levers for a very long time.

And I'm also thinking Jim shouldn't have caved so easily. It's not "kinda like when you get a new car." There's not an intuitively consistent place where automotive engineers put the reverse gearshifter position (though there may be longstanding customs observed by some or many manufacturers).

What's counterintuitive about the trigger shifters Jim and I find odd is that performing the same action on different sides of the same handlebars of a single bike produces different results -- from a rider's (user's) perspective.

Like I said: I'll get used to the shifters on my new bike just the way they are. The fact that there are nifty little numbers that show up in a nifty little window to show which sprocket is engaged means I won't be tempted (as I was on my old bike) to look down, away from the lunatic drivers careening-while-texting mere inches from my frail and vulnerable flesh, so I can watch which way the chain is moving. That's seriously nifty!

But it would be niftier still if a next generation of trigger shifter engineers took the experience of naive bike riders into account, and didn't design principally for gearheads.

I'd love to hear from the geeks who made it all the way to the bottom of this post in the comments below!! No spamvertisements, pleeze.

Related posts on One Finger Typing:
Bike parking fail
Sharrows and stripes: bike lanes for a common good
Fixing flat tires

Thanks to Keithonearth via Wikimedia Commons for the image of a bicycle drivetrain (adapted by the author of this post by adding a plain white background and converting to JPG format; anyone is welcome to use the adapted image under the terms of the original creator's license, Creative Commons Attribution-Share Alike 3.0 Unported).