FOAM TOTEM


Intervention Needed

I am vexed that no one has made an editor with a feature set comparable to Brief without also bloating it with so much useless crap that it's practically unusable.

Here is the list of things editors seem to consistently fail on now. These aren't hard.

  • Brief-like keybinds for the number pad should be possible. You'd think the general key remapping built into 90% of editors would work for this, but they often don't read the keypad properly or don't expose the necessary commands.
  • Undo stack must contain cursor movements (or be configurable to do so). When I hit undo, I literally want to undo the last state change I made to the UI. This is critical for good file navigation.
  • Selection should be a mode one enters and exits. CUA-style (shift+arrow) means that you cannot search while setting a selection. Typical usage: hoisting code fragments, indenting/outdenting blocks, etc. You aren't always sure where the end is, so you mark the top and then cursor around or search to find the end.
  • Columnar selections that effing work like god intended. Inserting a column block MUST move the cursor under the block. So, if you hit "Insert" 3 times, you've moved vertically through the file. (I'm looking at you, Notepad++ and your Scintilla brethren.)
  • Bookmarks. Numbered bookmarks. I don't need or even want names, or any other special crap. Let me drop bookmark 3, and jump to it with a low keystroke count.

Most of the other things (syntax highlighting, regex searches, etc.) are pretty well covered by most editors. There are other things I like but don't absolutely require: setting where the cursor lands with \c after a regex search, keyboard macros, converting tabs to spaces and back, etc.

It's the usability side of the things above that gets me every time I try to hop editors. I've pretty much defaulted to Visual Studio's editor (since it has a great debugger and all that for C). I'd estimate I'm only 80% efficient when coding because of the stuff above. I still use an ancient copy of Codewright for plain text editing because it has these features, but it's getting long in the tooth: the file dialog boxes malfunction sometimes.

The reason I wrote this was to avoid installing Vim or Emacs. Vim I could probably steel myself to use (if it was capable of the above). Emacs is certainly capable, but I don't fancy a climb up Everest to edit some effing Javascript. Save me!

Nymwars!

Google should make a full reversal on pseudonyms. It's one of those situations where the developer can get it totally wrong at first and still come out on top. (MMO devs run into this situation more often than they might prefer.) The debate itself shows a demand for innovation. This is a huge opportunity to provide a more robust identity paradigm and set the standard for the future. There are many well-written articles that outline some good approaches. Few of them appear particularly difficult technically (or socially).

Sticking to their guns on this would show a sad lack of vision, I think. They should grab the chance and make it a product-defining characteristic of G+.

It Turns Out OnLive IS Impossible

As a follow-on to the Steve Perlman talk, I checked out OnLive. OnLive lets you play games on your computer even if your computer is too weak to support the game. It basically runs the game on a server out in a server farm somewhere, and sends you the video. There's no need for you to install the game or anything. It also has a variety of community features, like being able to watch other people play the game, give them cheers, friend them and so on.

I had first heard of them maybe five years ago, and my thought was that it would never work. More specifically, the round-trip time between the computer and the server would be high enough that everything would be a bit laggy. Most (good) games already do a lot of work to decrease the time between input and output; adding 100-350ms of latency on top of that is an eternity. I flipped the bozo bit on the idea and moved on.

Five years later, I saw a video of someone using it, and it looked pretty slick. I was intrigued, so I decided to check it out. Basically, it has all the problems I expected it would. That said, it seems to work nearly as well as physics allows, which is pretty impressive.

Latency
Latency is apparent and obvious, even on games where it doesn't matter, like Puzzle Quest. That said, I'm impressed it was as low as it was. They must have some pretty fast video compression stuff on the servers. I can't imagine that an FPS is very playable on the system, though.
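
For a sense of scale, here's a back-of-envelope latency budget. Every number below is my own assumption for illustration, not a measurement of OnLive, but it shows how quickly a remote-rendering round trip adds up:

    # Back-of-envelope cloud-gaming latency budget.
    # All figures are assumed for illustration, not measured from OnLive.
    budget_ms = {
        "input capture + client send": 5,
        "network, client -> server":   30,   # one-way, on a decent broadband path
        "game simulation + render":    33,   # roughly one frame at 30 fps
        "video encode":                10,
        "network, server -> client":   30,
        "video decode + display":      15,
    }

    total = sum(budget_ms.values())
    print(f"added round trip: ~{total} ms on top of the game's own input lag")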

Video encoding artifacts
The video encoding artifacts are evident and obvious. The quality is pretty good by the standards of TV, but PC games are generally made to take advantage of things like legible, crisp text or texture patterns, and those get a bit smudged by the compression. Games were still playable. I doubt it'd be usable for business apps, though (as he suggests in his talk).

iPad Video is even worse
I also tested this on my iPad (which has an OnLive client, though I couldn't play any games on it) and the image was flat-out unacceptably fuzzy. (I suspect the iPad version is actually a video of a remote server running the OnLive client, which is then going to another server for the game. On a desktop, you run the client yourself.)

Network congestion
And during my one-hour play session, what I was doing failed twice because of network congestion or some other glitch. It eventually recovered, but the video can get scrambled or vanish, which is disorienting when playing a twitch game. Not a big deal in FPS games where you die 100 times anyway, but if I were click-dragging items around in Maya or some other real app, that'd suck to the Nth degree.

None of these things are really their "fault". In fact, it works a ton better than I ever thought it would. It does show, however, an inherent weakness of the Cloud Everything approach.

Is DIDO Radio World-Changing?

Recently, Steve Perlman gave a talk at Columbia on what his company, Rearden, has been up to for the past few years. Perlman thinks that not enough true invention is being done any more: venture capital expects payoffs in the one-to-three-year timeframe, and innovation and invention can take a lot longer than that. So he created Rearden Labs to do this kind of invention. Several of the inventions are discussed in the talk, but the one that really caught my eye was DIDO radio.

Not much was said in the presentation except that it was world-changing, and awesome, and that the "big guys" didn't want to use it even though it's better in every way. There was some potential for hyperbole. That said, I was intrigued because the basic approach sounded clever. It also has the potential to solve the "last-mile" problem in rural areas (where it's really a "last-twenty-miles" problem).

The presentation hardly went into any detail, and since I know almost nothing about radio, that isn't too surprising. I did some research today to see just how awesome it is. I read a whitepaper Rearden posted on DIDO radio, but it didn't give me quite enough to know what was real and what was hyperbole. They've been working on this for a long time, though, and the whole thing is laid out in their patents.

The executive summary is that it’s still pretty cool, but not ultra-mind-blowing-cool as far as I can tell. There’s an enormous amount of prior art that this is founded on. WiMax and LTE do stuff like it (but not exactly it). The DIDO patents actually cover the difference between MIMO and the new stuff explicitly. (Imagine! They actually teach as they are supposed to!)

My Summary
In comparison to the current state of the art, which also uses spatial differentiation to send multiple signals on the same frequency, DIDO removes the need for multiple antennas on the client side, effectively replacing them with computing horsepower at the base station. In the current art, each client trains against each antenna to see how it "hears" it, and then uses this information to decode whatever crazy waveform it receives later and extract just the one it needs. (It gets a crazy waveform due to propagation-delay differences from different towers, echoing, attenuation, etc.) In DIDO, each client does the same basic training but sends the information back to the base station. When constructing the final waveform for each antenna, the base station convolves all the clients' signals together using this information. The result is that when the transmissions all arrive at a particular client, the only signal it should "hear" is its own. The client doesn't need to do much work since everything is pre-compensated.
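
To make the pre-compensation idea concrete, here is a toy sketch of my own (not taken from Rearden's whitepaper or patents): a flat-fading, zero-forcing version in which the base station inverts the channel matrix the clients reported during training, so each single-antenna client receives only its own symbol. The real system works on wideband channels and convolves per-path impulse responses rather than multiplying by a single complex gain, but the basic idea is the same:

    # Toy, narrowband (flat-fading) illustration of base-station pre-compensation.
    import numpy as np

    rng = np.random.default_rng(0)

    n_antennas = 4   # spatially distinct antennas at the base station
    n_clients  = 4   # one single-antenna client per spatial stream

    # "Training": H[k, a] is how client k hears base antenna a
    # (a single complex gain per link in this simplified model).
    H = (rng.normal(size=(n_clients, n_antennas))
         + 1j * rng.normal(size=(n_clients, n_antennas)))

    # Data symbols the base station wants each client to receive.
    s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n_clients)

    # Zero-forcing precoder: pre-distort so the channel itself undoes the mixing.
    x = np.linalg.pinv(H) @ s      # what the antennas actually transmit

    # Each client hears the sum of all antennas through its own channel...
    received = H @ x

    # ...and that sum collapses to just its own symbol; the client does
    # essentially no work, which is the whole point.
    assert np.allclose(received, s)
    print(np.round(received, 3))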

The advantages of this approach are that the client radios can be smaller (apparently, the current art requires multiple antennas on the client for reasons I'm not clear on) and much simpler, since there's no heavy computation left for the client to do.

The number of simultaneous streams of data that can be sent is limited by the number of spatially distinct antennas at the base. (They are preferably at least 1/2 a wavelength apart according to the patents.) If you have more clients than antennas, you time-multiplex them.
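
For a rough sense of what "at least 1/2 a wavelength apart" works out to, here are a couple of example frequencies (my own picks, not anything Rearden has specified):

    # Half-wavelength antenna spacing at a couple of example frequencies.
    c = 3e8  # speed of light, m/s
    for label, f_hz in [("700 MHz (a typical rural/TV band)", 700e6),
                        ("2.4 GHz (the Wi-Fi band)",          2.4e9)]:
        half_wavelength_cm = (c / f_hz) / 2 * 100
        print(f"{label}: half a wavelength is about {half_wavelength_cm:.1f} cm")

So at these frequencies the base-station antennas only need to be somewhere between a few centimeters and a couple of feet apart.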

I don't understand how upstream data is sent in this system, or how the training information is determined. Apparently, these are meant to be obvious to one "skilled in the art", since the patent treats them as standard jargon.

So, probably a great improvement over the state of the art, but probably not entirely earth-shattering to those in the industry.