Archive for the ‘technology’ Category

Memories of my youth: Technology progress

July 31, 2017

Times change. Technology changes. Most everybody looks at computing power as an example. If you want another example (albeit computer-related), look no further than our worldwide communications network, and how it’s changed in my adult lifetime.

1970: I am a young USAF lieutenant, based at RAF Mildenhall, in the UK. My mother, recently divorced and on her own, is going through a bad patch. How can I give her some moral support? What about a phone call?

There was one phone in the BOQ on which you could book international calls. It sat at the front desk, normally behind the glass of the cashier's cage. To make a call, you first booked it with the international operator, who would call you back when a circuit was available. Then you stood next to the cashier's cage, the handset cord snaked out through the hole in the glass, and held your conversation while the rest of the world cashed checks and paid for dinner. The cost was $1.00 a minute.

1980: I'm a USAF major at the Alert Center at DIA, in the Pentagon. A US carrier, en route to a port visit in Yugoslavia, violated Yugoslav airspace on its way in to port. The National Military Command Center (NMCC) duty general had the US ambassador in Belgrade on one phone line and the captain of the carrier on another. I stood by in case there was a need for Intelligence input. There wasn't.

It was interesting, and exciting, to have real-time communications halfway around the world, even if it had to be on two different phone lines. Based on recent reports, things haven't changed in the cross-Department area.

1990: I'm a contractor, working on a then-state-of-the-art geographic information system installed in the Alert Center at DIA. The Gulf War Scud missile attacks are starting, and I'm helping chase them. A missile would launch from Iraq, and the plume would be detected by a satellite in geosynchronous orbit. The satellite would radio the detection to the ground site in Colorado, which would report it to the NMCC in the Pentagon. DIA was also on that circuit, and we'd enter the launch coordinates into our database, pull up possible hiding places, like bridges and overpasses, and send that on to the Scud Cell in Saudi Arabia. They'd pass the data to the F-15Es, and the fighters would try to find the launchers.

What with intermediate hops, the signal had to travel a good 120,000 miles from detection to target assignment (each bounce through a geosynchronous relay, parked some 22,000 miles up, adds roughly 45,000 miles to the path).

1997: Meanwhile, Cordelia has her own wireless phone. All you have to do is pull the antenna out.

2016: I am a college professor en route to a conference in Hokkaido, Japan. While travelling at 90 mph on the Shinkansen bullet train, I call my brother in Utah on my pocket phone. The next year he returns the favor, calling me from Graz, Austria, on his phone, with full-motion video.

 


It’s Not A Robot

July 9, 2016

So the (lone, untriangulated) Dallas shooter is dead, killed in what is widely heralded as the first combat use of a robot in the US. Only, it’s not. A robot is:

… a mechanical or virtual artificial agent, usually an electromechanical machine that is guided by a computer program or electronic circuitry, and thus a type of an embedded system. Robots can be autonomous or semi-autonomous … (Wikipedia)

Our modern era has an unfortunate habit of using cool words in ways that redefine their underlying meaning. For example, ever since Star Wars, android has been used to mean any mobile robot, instead of a human-seeming one. And robot is used for any mobile telepresence device.

Remote-presence EOD machines have no autonomy. They can't. You don't want them to. You want them under precise human control at all times. Their job is to be manoeuvred into position next to a suspicious bag of groceries by a human handler, so that the handler can (for example) set off a small explosive charge that will detonate the main charge (or, more likely, blow somebody's dinner across the parking lot). They are the modern equivalent of a bomb-onna-stick, cousins to the Bangalore torpedo or the self-hoisting petard.

So, yes, the use of a remote-presence EOD machine to deliver a lethal payload to a human target is a first. It is not a harbinger of the rise of the robot killers.

U.S. Economic Growth 1750-2050 Part 1

September 25, 2012

There's a paper over at VoxEU* which postulates that US economic growth is coming to an end.

The paper is deliberately provocative and suggests not just that economic growth was a one-time thing centred on 1750-2050, but also that because there was no growth before 1750, there might conceivably be no growth after 2050 or 2100. The process of innovation may be battering its head against the wall of diminishing returns. Indeed, this is already evident in much of the innovation sector.
Robert J. Gordon, 11 September 2012

I have some issues with this, but not because of a knee-jerk "that can't happen to US" response. It obviously can and must happen, if only because of thermodynamics. Our ability to provide energy to our global economy is going to hit a wall sometime in the next 300-400 years. I'm not talking about peak oil; I'm talking about some combination of having to cover the Earth in solar panels versus raising the surface temperature to the boiling point.
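If you want to see where the 300-400 year figure comes from, here is a back-of-envelope check. It's a minimal Python sketch under my own assumptions (current power use around 18 TW, the long-run 2.3%/yr energy growth rate, Stefan-Boltzmann radiation only, greenhouse effects ignored entirely); nothing in it comes from Gordon's paper.

    # How long can energy use grow ~2.3%/yr before the waste heat alone
    # pushes the mean surface temperature to boiling? All numbers are
    # assumptions; greenhouse effects and insolation changes are ignored.
    import math

    SIGMA = 5.67e-8                  # Stefan-Boltzmann constant, W/m^2/K^4
    AREA = 4 * math.pi * 6.371e6**2  # Earth's radiating surface, m^2
    T_NOW, T_BOIL = 288.0, 373.0     # mean surface temperature now / boiling, K
    P_NOW = 18e12                    # rough current human power use, W
    GROWTH = 0.023                   # long-run historical energy growth rate

    # Extra radiated power needed to hold the surface at 373 K (T^4 scaling),
    # then the years of compound growth for waste heat to supply it.
    extra_power = SIGMA * (T_BOIL**4 - T_NOW**4) * AREA
    years = math.log(extra_power / P_NOW) / math.log(1 + GROWTH)
    print(f"~{years:.0f} years")     # ~436 on these numbers

Crude as it is, it lands in the same few-century ballpark, which is all the argument needs.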

On the surface, Gordon makes an interesting case: we've already cleared the technological low-hanging fruit, so future productivity gains will be harder to come by; meanwhile, structural issues in US society add headwinds that will help drag our growth rates back down to colonial levels.

I guess my initial problems with this paper stem from three issues: he’s given up on the computer revolution too soon, he’s ignored some new technology that will have a major impact on productivity, and he’s identified a start point, but not an end point for the productivity drop.

First, the computers. No. First, the technology. Technology always takes longer to have an impact than its inventors realize. As I tell my students when we talk about bringing new technology into a firm: assume it does exactly what the vendor says it does; what else has to work in order for it to be successful? My favorite example is frozen food. Clarence Birdseye invented flash freezing of food in 1922, but frozen food didn't become an American staple for over thirty years. What happened? What else had to work? Refrigerated trains for distribution, glass-topped freezer displays in the general store for sales, home refrigerators with freezer compartments in place of ice boxes. It always takes longer.

Computers are a post-WWII phenomenon, and general-purpose business computing really only started with the IBM System/360 in the 1960s. Fifty years later, we are just on the verge of ubiquitous computing — as Cory Doctorow says, a hearing aid is a computer you stick in your ear, and a car is a computer you sit in.

As for new technology, I see nanoscale engineering and 3D printing as game changers that will have major impacts on how things are done. Nanoscale materials, so far, are giving us self-cleaning cloth, paint, and glass; medical drug-delivery techniques; and new ways of embedding computers. 3D printing will let us have the cost advantages of mass production combined with the customization potential of the job shop: single-unit mass production. Are these advancements equivalent to building the railroads across the West? Probably not, but we won't know their true impact until, say, 2050.

Third, Gordon claims his 'headwinds', about which more anon, will bring US productivity growth back to colonial levels before that date, 2050. But he doesn't say what might lie on the other side. The structure of US society isn't going to freeze at that point, and some of the drags on productivity (such as old folks like me) will be starting to ease up (OK, die off) by then.

Now, he has an interesting graph, possibly the best labeled and easiest to understand of the lot, showing the change in actual and predicted levels of GDP per capita in constant dollars, 1300-2100.

[Figure: GDP per capita, 1300-2100. Source: Gordon, CEPR Policy Insight No. 63.]

What makes the graph compelling is that it’s a classic S-curve, headed for a rolloff at about $90K/capita sometime shortly after 2100. What makes the graph less than compelling is that there’s no indication we are actually past the inflection point, and until one passes the inflection point there’s no sure way of predicting what that upper bound will be.
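That point is easy to demonstrate numerically. Here's a toy Python sketch, using synthetic data rather than anything from Gordon's paper: fit a logistic curve to observations truncated before versus after the inflection point, and compare the uncertainty on the ceiling K.

    # Fit an S-curve to data cut off before vs after the inflection point
    # and watch how poorly the ceiling K is pinned down in the first case.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """S-curve with ceiling K, growth rate r, inflection at t0."""
        return K / (1 + np.exp(-r * (t - t0)))

    rng = np.random.default_rng(0)
    t = np.linspace(1700, 2100, 200)
    data = logistic(t, K=90, r=0.03, t0=2040) + rng.normal(0, 0.5, t.size)

    for cutoff in (2000, 2080):      # truncate before, then after, t0 = 2040
        mask = t <= cutoff
        popt, pcov = curve_fit(logistic, t[mask], data[mask],
                               p0=[50, 0.02, 2000], maxfev=10000)
        print(f"data through {cutoff}: K = {popt[0]:.1f} "
              f"+/- {np.sqrt(pcov[0, 0]):.1f}")

With data only through 2000, the fitted ceiling comes back with a very wide error bar; with data through 2080 it snaps close to the true value.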

Next week, I’ll address some other issues. Like, does it matter? Is it good news?

Did I say next week? I meant “Next time I have a chance to work on this”.

—————————–
* A web portal operated by the Centre for Economic Policy Research (CEPR)

Chihayafuru Lasertag

April 17, 2012

Over on Instructables there’s a description of how to make your own mechanical Karuta practice opponent using lasers. How cool is that?

The idea is, you have cards marked with both a machine-readable code and human-readable characters. The machine uses a webcam to pattern-match a line of text to a specific card. The human uses brains to pattern-match the spoken (via a text-to-speech tool) text to a specific card. The human points to the card with their hand. The machine points to the card with a laser. If the human is faster than the machine, they can flip the card out of the way. If the human is slower than the machine, they get a laser burn on the back of their hand. This is called reinforcement learning. Based on the embedded vid, the machine is really fast. While the machine doesn't act like a real human, it does give you the advantage of actually pointing to the card (so you know where it is if you couldn't find it), or of not moving at all (on one of the 50 dead cards).
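For what it's worth, here's how I'd sketch the machine's matching step in Python. The Instructable's real pipeline is its own Arduino-and-webcam affair, so every name below is my invention: compare a cropped grayscale patch of the card's text line against stored templates by normalized correlation and take the best scorer.

    # Hypothetical sketch of the card-matching step (not the project's code):
    # normalized cross-correlation between a webcam patch and stored templates.
    import numpy as np

    def normalize(img):
        """Zero-mean, unit-variance copy of an image patch."""
        img = img.astype(float)
        return (img - img.mean()) / (img.std() + 1e-9)

    def best_match(patch, templates):
        """Return the name of the stored template that best matches patch.

        templates: dict of card name -> grayscale array, same shape as patch.
        """
        p = normalize(patch)
        scores = {name: float((p * normalize(tpl)).mean())
                  for name, tpl in templates.items()}
        return max(scores, key=scores.get)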

The Instructable itself leaves a little bit to be desired. The key to the whole process is a flow chart that is stuck in the number-two position. The actual building of the tower and the assembly of the webcam and laser are not mentioned. I guess if you are an Arduino hacker who already feels comfortable with circuit boards, this is not much of a problem. If you are an Ikea-challenged fumbler like me (I give a whole new meaning to the term "hacker"), it's a little problematic.

As an alternative, I'd like to see someone come up with a Dance-Dance-Revolution-style practice board; a rough sketch of the logic follows. You'd have a board with pressure sensors in a 6 x 9 grid. When the machine reads a line, it starts a timer. When you hit the card, the timer stops, and you get to see how fast you were. You get to use real cards, not ones that are half QR code, and you can collect statistics on how well you are doing and which cards need more work. Plus, it's kinder on the hands.
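A minimal Python sketch of that logic. Here read_grid() and speak() are hypothetical stand-ins for a pressure-sensor driver and a text-to-speech call; only the timing and bookkeeping are real.

    # Practice-board sketch: time the reaction to each card and keep stats.
    import time
    import statistics
    from collections import defaultdict

    times_by_card = defaultdict(list)      # card -> reaction times, seconds

    def practice_round(card, position, read_grid, speak):
        """Read one card aloud, time the touch, and log the reaction time."""
        speak(card)                        # the machine reads the line
        start = time.monotonic()           # ...and the clock starts
        while True:
            hit = read_grid()              # (row, col) of a press, or None
            if hit is not None:
                elapsed = time.monotonic() - start
                times_by_card[card].append(elapsed)
                return hit == position, elapsed

    def cards_needing_work(n=5):
        """The n cards with the slowest average reaction time."""
        avg = {c: statistics.mean(ts) for c, ts in times_by_card.items()}
        return sorted(avg, key=avg.get, reverse=True)[:n]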

Technology Dependence

September 15, 2011

Our story so far: one of my two standard 4:3 monitors is dead, and I really miss my dual-screen setup — maybe not as much as I miss Buffy, but right up there. I can't use the spare off the SimBox as one of the pair, 'cause it's a wide-screen, and will make my eyes go all wonky o_O. I could move it onto my desk and plug the remaining standard monitor into the SimBox.

No, I can't. The SimBox has dual DVI outputs. There's a VGA port, but it's disabled. I guess I have to spend some money.

Do you know how hard it is to find a standard 4:3 monitor these days? All the stores are carrying wide screens. I could order from the 'zon, but that would take days. Sigh. I guess I'll just have to make do with a dual wide-screen setup [ __ ][ __ ]. Turns out they can be had cheap at Costco, if you don't demand single-digit-millisecond response times – and I don't have the twitch responses to use them anyhow. A quick late-night run downtown; drag the new box to checkout while the lights are flashing and the "we are closing" announcements are blaring; take it home and pair it with the one off the SimBox. Then we move the old VGA monitor over to the SimBox and hook it up through a newly purchased DVI-to-VGA adapter plug. There we are. All done and dusted, and it's not even midnight yet.

So, how’s the new setup coming along? Feels strange. It’s very comfortable to have my dual monitor setup back, and the wide screens mean I can spread out stuff even more. But that’s just the problem. Instead of being 20″ from one side of the display setup to the other, it’s now 40″. Items that were right here are now way over there. There’s a lot more mousing around to do.

I expect to get more comfortable with this setup as time goes on, but it gives me a new appreciation for how simple things can change the user experience.

Technology Dependence

September 13, 2011

It's amazing how fast we adapt to new technology, useful technology, in our lives. A few months ago I had cause to buy a second monitor for my main PC — I teach MIS, so older boxen litter the house like dust bunnies, but there's one I do most of my work on. The others are used for specific purposes, like demonstrations, or running simulations, or MS Windows. None are cutting edge.*

I had been suffering monitor envy, reading about all these dual- and triple- monitor setups. When we got our tax refund, I decided my share would go to an upgrade. Actually, what it did was allow me to buy a cheap widescreen for the SimBox, and move the existing monitor in beside the relatively newer one that came with my main box.

So, now I had two 19″ standard monitors, side-by-side [_][_] , each sitting atop an upside-down flowerpot. Whole new worlds opened up. Opera browser on this screen, e-mail and tweet stream on that screen. Never again would I have to Control-Tab just to see what was going on while I was working. Never again would I have to stack up Impress and Calc and Opera (oh, my) and spend as much time flipping as I did building a lecture. Never again would my students have to wait whole minutes, minutes I tell you, for me to respond to a tweet.

Then, disaster. One of the monitors went into flicker mode, and within minutes was deader than korfball.

Fortunately, it was still under warranty. I called ViewSonic, and the nice man walked me through the usual troubleshooting, including an "unplug, hold the power button for 30 seconds" trick that was new to me. Results: nada. All was darkness. I got an RMA number and a mailing address and was on my way. Well, except for the fact that I was still monitorless, and would be until I sent it off, they worked on it, and they sent it back. Say, a week after Wenceslaus Eve.

And herein lies the point of this essay. I’ve only been using dual monitors for half a year, but I am already dependent on them. A single monitor drives me crazy — everything is hiding behind everything else. There’s no room to spread out. I can’t have my references open alongside my work. I’m starting to feel claustrophobic.

But I have a Cunning Plan. More anon.

———————–
*Some years back, the Association for Computing Machinery found that, unless you need the cutting-edge features of a next-generation, say $4000, machine, you were better off going with a last-generation $2000 machine, using it for two years, and then buying the no-longer-cutting-edge, formerly-next-generation machine for $2000. You waited two years for the functionality, but you had an additional $2000 in the bank.

User Interface Design – an Amazon Fail

August 9, 2011

You'd think that a company as big and as dedicated to the online world as amazon.com could avoid doing dumb things with the design of their user interface. Turns out, that's not true.

Don’t get me wrong. I think Amazon does a great job of user interface design, mostly. It’s just that their latest adventure, revamping their shopping and wish lists, has turned seriously pear-shaped.

The user interface is, as Brenda Laurel says, all there is.

Time Marches On

August 3, 2009

Working in a technical field, I am still amazed at how fast technology is progressing. I am not particularly gadget happy, so I don't need the latest, and I never buy cutting edge. That means I tend to hang on to gear for at least one more product cycle than the rest of the world does. And that means I am always surprised at how the rest of the world has moved on — there's knowing something intellectually because you read about it, and there's knowing something because you find you can or cannot do something.

For example, I bought my old home PC eight years ago. It was a mid-range Dell, 800MHz PIII, with a 20GB HDD, running Windows NT. I bought my wife's old PC a month later, and it was a 900MHz PIII with a 76GB HDD. We both upgraded a couple of years ago — me to Ubuntu, and her to a MacBook. The two PCs were kept for those things that absolutely required Windows. Recently they both started feeling poorly — a keybounce on the power button rendered mine unbootable — so we replaced them with refurbished OptiPlexes from Amazon. It's surprising how good a deal one can get on a used machine.

The point of this story (finally) is that I planned to use one of the old PCs as the start of a home media system. After all, that's what everyone talks about, right? Repurposing an old PC by installing Ubuntu and a TV card. How hard can it be? So I went on the web to find what the latest was. Hauppauge still dominates the field, but there's a problem. It seems there aren't a lot of choices out there for PCI cards, and all the software wants 1GB of RAM or more. Well, our two systems started out with 256MB each, and combining them still gave us only half of what was needed; plus, even the "high end" one is underpowered. So, I guess I'll start slow and build something for our CDs and see where it goes from there. Meanwhile, anybody need a 0.8GHz machine with no memory and a bad hard drive? One that hasn't been dusted for eight years?

I go two-headed

July 11, 2009

My wife and I have been playing musical monitors. We saw a nice 23″ Samsung wide-screen monitor last week, and I just had to have it. After I got it on my desk, I decided it was a little much compared with my 19″ ViewSonic 903b. So my wife graciously accepted it as a gift and gave me her 19″ ViewSonic 912b. I plugged them into my Linux box, configured them, and am now learning their little idiosyncrasies. For some reason, I can't get them to mount as two separate screens – the Nvidia card wants to make them one big screen, with the mouse jumping from one monitor to the other as you slide it across. That's not bad, but it does change your windowing habits – it's a little disconcerting to have 3/4 of your email message on one screen, with the RH 20 characters on another. So what's required is that instead of maximizing a window, I adjust its size to fit a single screen and then drag it to whichever monitor is appropriate. That doesn't keep the Opera bookmarks from spilling across the margin, though.
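What the Nvidia driver is doing, as far as I can tell, is treating the pair as one big TwinView desktop. If I ever decide I really want separate X screens, my understanding (untested; check the driver's README before trusting me) is that it takes two Device/Screen pairs in xorg.conf, something like the sketch below, where the identifiers and BusID are placeholders for your own:

    # Untested xorg.conf sketch for two separate X screens on one card.
    Section "Device"
        Identifier "nv0"
        Driver     "nvidia"
        BusID      "PCI:1:0:0"   # your card's bus ID; see lspci
        Screen     0
    EndSection

    Section "Device"
        Identifier "nv1"
        Driver     "nvidia"
        BusID      "PCI:1:0:0"
        Screen     1
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "nv0"
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device     "nv1"
    EndSection

    Section "ServerLayout"
        Identifier "TwoHeads"
        Screen 0 "Screen0" 0 0
        Screen 1 "Screen1" RightOf "Screen0"
    EndSection

The tradeoff, as I understand it, is that with separate screens you can't drag a window from one monitor to the other at all, which may be worse than the one-big-screen behavior.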

Right now, I have the left hand one set up as my main screen, with my working window for Opera. The right hand one is set up with my email, and with Twitter open in a separate Opera window (not tab). That way, when I use the KVM switch to move to the XP box on the LH screen, the RH screen still shows my comms flow. Gotta stay in touch with the flow.

One of the nice things about Opera is that you can tell it to refresh a page every x minutes, so my tweet stream updates every 15 minutes in the RH window, and the email checks every ten. I understand FF has a plugin for that, and I don't know what IE does – probably assumes your servant will come in and press F5 as needed.

One interesting thing is the different color balance between the two monitors. The 903 (left hand) shows the Ubuntu background as earth-toned brown, while the 912 is distinctly orange. I ran the color setups on both, and both are set to 5600, presumably kelvins (the color temperature).

Bing

June 1, 2009

Or should it be ‘bling’? What’s with the balloons?

MS’s new search engine. Returned mostly good answers on my standard questions, though it didn’t catch the page on who first used the word ‘filk’. The ‘edge of chaos’ stuff was split pretty evenly between Kauffman and the MMORPG.

My take is that it’s on par with Google, but that the big ‘G’ has the approach down better — when I’m searching, I don’t need popups or hoverups, or cutesy pictures of balloons. I want a minimalist interface and the best results.

Bling will be a Google-killer only to the extent that MS's near-monopoly on the desktop can entice captive IE users to go there.

Cuil Search

May 26, 2009

As a follow-on to the previous entry, I recently came across the semantic search engine Cuil.

http://www.cuil.com/

Did ever so much better than W|A on the search terms I used, and produced a more usable set than did the Goog.

Popped up the right names for reconstructability analysis and edge of chaos, picked up on Wolfram and cellular automata. Got the dates right. Got good stuff on handbells and filk, and even came up with the first use of the term 'filk'. Apparently by Karen and Poul Anderson. I like it.

Wolfram Alpha

May 16, 2009

Just tried out the public alpha version of Alpha. It's interesting and has potential, but it's no threat to kartoo.com.

Didn’t know:
handbell
click fraud
chaos theory
edge of chaos
neural net
filk

Minimal info on:
server
chaos

Useful stuff on:
Spokane, WA (but not on the river)
Princess Mononoke

Way too deep too fast on:
cellular automata (Wolfram wrote a 12lb book on it)

Didn’t match April 11, 1970 with Apollo 13
Didn’t match Sep 2, 1945 with end of WWII