31 December 2009
29 December 2009
Vanity aside, you might find Kate Greene's piece in Gizmodo interesting: "The Hunt for the Perfect Screen."
07 November 2009
04 November 2009
EXAGGERATE (#22) Imagine a joke so funny that you can't stop laughing for a month. Paper stronger than steel. An apple the size of a hotel. A jet engine quieter than a moth beating its wings. A home-cooked meal for 25,000 people. Try exaggerating your idea. Think big: what if it were a thousand times bigger, louder, stronger, faster, or brighter? What if the number of people who could use it were increased a thousandfold? Now think small: what if it were only one-thousandth as powerful, fast, costly, or complicated as before? How can you exaggerate your idea?
22 October 2009
01 October 2009
06 September 2009
29 August 2009
a) suspending a seat for supporting a user between only two chains that are hung from a tree branch;
b) positioning a user on the seat so that the user is facing a direction perpendicular to the tree branch;
c) having the user pull alternately on one chain to induce movement of the user and the swing toward one side, and then on the other chain to induce movement of the user and the swing toward the other side; and
d) repeating step c) to create side-to-side swinging motion, relative to the user, that is parallel to the tree branch.
2. The method of claim 1, wherein the method is practiced independently by the user to create the side-to-side motion from an initial dead stop.
3. The method of claim 1, wherein the method further comprises the step of:
4. The method of claim 3, wherein the magnitude of the component of forward and back motion is less than the component of side-to-side motion.
28 August 2009
Maybe it’s in his genes or maybe it was from his environment, but Gregg Favalora feels he was destined to work with optics and imaging. In fact he has been working with 3-D imaging since he was in junior high....
01 August 2009
24 July 2009
- A population of organisms (usually called chromosomes), each chromosome being composed of a bunch of genes... and each gene can be as simple as a 0 or 1 -- or as complicated as a mathematical function like Sin(x). Usually the population is random, a bunch of Frankensteins.
- A goal to achieve, like "figure out what math function best approximates this stock ticker," or "figure out how to place 100 Lego blocks to make a really long bridge that doesn't collapse under its own weight." Usually the goal is presented in terms of a test called a fitness function. We test each organism's (or chromosome's) ability to pass the test. The chromosomes that do better survive - pairs of chromosomes (parents) breed, make children, the kids are mutated, and the weaker parents are killed. Repeat.
- And you need to let the simulation know how likely it is that genes will mutate or that breeding parents will swap chromosomal segments ("crossover").
- g-fav's first attempt to write a GA, in C (May 2007)
- "virtual schadenfraude" - turn on your speakers! (Oct 2008)
- Lego bridges and truck backer-uppers (May 2009)
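For readers who'd like to see the moving parts in one place, here's a toy sketch in Python rather than C (all the names here are mine, not from g-fav's code): bit-string chromosomes, a "count the ones" fitness test, single-point crossover, and per-gene mutation.

```python
import random

def evolve(n_genes=20, pop_size=30, generations=100, p_mut=0.02):
    """Toy genetic algorithm: evolve bit-strings toward all ones."""
    # Random initial population -- a bunch of Frankensteins.
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    fitness = lambda chrom: sum(chrom)  # the "test": count the 1-bits

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # the fitter half survives
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)  # single-point crossover
            child = a[:cut] + b[cut:]
            # Mutation: flip each gene with small probability.
            child = [g ^ 1 if random.random() < p_mut else g
                     for g in child]
            children.append(child)
        pop = parents + children              # the weaker half is replaced

    return max(pop, key=fitness)
```

Swap in a different chromosome encoding and fitness function, and the same loop fits the stock-ticker or Lego-bridge goals above.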
04 July 2009
28 June 2009
26 June 2009
- Building an advisory board
- Choosing between an LLC and a Corp.
- How to Pitch and not be Screwed By VCs
24 June 2009
10 June 2009
04 June 2009
29 May 2009
28 May 2009
22 May 2009
21 May 2009
14 May 2009
Hey techie entrepreneurs, here's something for you to consider registering for...
Leading New England tech / business journalist Scott Kirsner wrote to me about the next step in his effort to keep the best minds in Massachusetts and to figure out where the next waves of growth will come from. He describes his "What's Next In Tech: Exploring the Growth Opportunities of 2009 and Beyond" event as:
The idea is to provide a picture of the tech clusters that are going to drive the next waves of growth here in Massachusetts, from cloud computing to robotics to videogames to energy efficiency to social media. Speakers include venture capitalist Bijan Sabet from Spark Capital, iRobot co-founder Helen Greiner, Brian Halligan of HubSpot, and Tim Healy, who runs the publicly-traded EnerNOC. (Note: The early registration rate ends on May 15th -- tomorrow.)
See Scott's blog posting on the event (signup).
What do I think is "next in tech"?
Several folks who've answered this question focus on the leading edge of Web and mobile technologies ca. 2009, e.g. cloud computing, social networks, mobile advertising and tracking, etc.
- Medical devices with a stronger software component. An example is retrofitting ultrasound machines with hardcore computer vision algorithms to assist the clinician in performing biopsies or interventions (e.g. my bias towards prostate brachytherapy). These fall within "intraoperative planning and guidance" - there's a huge opportunity to make money by figuring out how to track deformable organs, like the brain or liver, to tell the surgeon where the tumor is right now. Companies like Medtronic mostly focus on the 20% of the body that's rigid, e.g. orthopedic surgery. That's the easier problem. It's time to help the other 80% of the body.
- Imaging, i.e., the capture and processing of light, e.g. cameras. This won't be news to you in computer graphics, but keep an eye on the labs of Ramesh Raskar (and his "imaging ventures" class co-led by Joost Bonsen), the various Stanford Graphics Lab research efforts in light fields, and Shree Nayar's work at Columbia's CAVE group. Examples: cameras whose pictures you can refocus after you've returned home, removing blur from scenes, synthesizing views from angles you never photographed well... Graphics processors have become powerful enough that consumer-grade cameras will be capable of extraordinary things in a few years.
- Display, particularly 3-D. Yeah, sure, this is what I spent my conscious life working on, but there's really something here. Stereo cinema has exploded. RealD, one of the leading providers of stereoscopic cinema technology, claims to have over 1,600 screens worldwide. This opens opportunities in: camera technology, editing / direction software, secure transmission and playback, glasses, projectors, and other areas. Further, we might tire of our 2-D desktop displays for a more holographic experience. Many organizations have developed technology that can project real, 3-D, "look-around" imagery in front of screens (or above tabletops, Death Star-style!). The day will come.
- Advanced toys. That's all I'll say on that one.
- Systems that begin to mimic natural processes like "emergence," "swarms," and "genetic algorithms," everywhere from automated mechanical design to distributed processing to more realistic videogame AI.
13 May 2009
27 April 2009
A few months ago, photographer Sara Forrest profiled me as part of a series of pieces on entrepreneurs and a bit of our personal lives. It is running in this week's Computerworld magazine, viewable online: "The Grill: Gregg Favalora talks about 3-D imaging breakthroughs."
At the moment, I'm back at the office ingesting liquid and solid caffeine in preparation for a class that Joost Bonsen asked me to participate in this Wednesday night at MIT: MAS.964, Imaging Ventures. (Two hours! Whew.)
It was around this time of year back in 1997 that Joost was responsible for pushing me into their business plan writing competition, which led to my leaving Harvard, which led to starting Actuality. Last week, we began the process of winding down the company. So this is a good way to formalize a lot of my recent introspection about product development and entrepreneurship.
25 April 2009
15 April 2009
- Optical tables
- FireWire cameras
- Collections of optics, like singlets, prisms, etc.
- Translation stages
- Other cool stuff, like SensAble Phantom OMNI haptic interfaces
08 April 2009
20 March 2009
I came back from tonight's NES-OSA meeting at the Media Lab, which was a real joy. We learned about Ed Boyden's optically-throttled neuron experiments (in which a virus genetically alters mice embryos to make their brains optically-sensitive), Ramesh Raskar's work in computational photography (including deblurring using "fluttered" shutters), and Michael Bove's progress in holographic video using surface acoustic wave modulators.
Been storing up a few interesting links for my faithful readers:
(movie clip) Dan Dennett's TED talk regarding a Darwinian view on why things are funny, or sweet, or cute, or sexy. The explanation for "funny" was the most surprising to me.
(blog) Kevin Kelly's blog, "New Rules for the New Economy," lately discusses various swarm-related ideas. He's got an RSS feed.
(utterly random nerd humor) From the people who brought you the faux-scientific educational series "Look Around You" is this very brief clip, "The Helvetica Scenario."
Some forms of stimuli need to become progressively more extreme to elicit a reaction; I wonder if there's an analogy for humor. I think my sense of humor is really getting stretched towards the increasingly, bafflingly bizarre.
Will this reach a limit? Next year, will I only laugh at things that are a collection of non sequiturs? I remember back in fifth grade, all it took was a good Monty Python episode. Then, in grad school, I needed something along the lines of those parody GI Joe public service announcements, overdubbed with nonsense. Then came the "Retroencabulator," followed by "Look Around You," which is a sort of meta-humor that pokes fun at how utterly arbitrary the scientific method must seem to non-scientists, and then Matthias had to up the ante and direct me to "You Look Nice Today," which I find funny for its stream of utter falsehoods passed off as obvious truths.
What's next, I wonder? :-)
(slapstick) Well, maybe it comes full circle. Here is a video collection of talking cats and dogs.
16 March 2009
I am excited to announce that the New England Section of the Optical Society of America has teamed up with the Boston IEEE Lasers & Electro-optics Society to present an evening of talks with three world-class researchers:
Info: Three Talks at the Media Lab (takes a few seconds to load)
"Computational Photography" with Ramesh Raskar
"Optical Brain Control" with Ed Boyden
"Toward Consumer Holographic Video" with Michael Bove
When: Evening of Thurs. March 19, 2009
Deadline: To join the dinner networking pre-event, RSVP *today, Monday.*
For those of you who haven't been to an NES-OSA meeting before, this is a great way to meet other people in the optics community. Also, there should be sign-up sheets for people who want to join.
ps Remember to RSVP now.
23 February 2009
Following two small self-educational / art projects here using Wolfram Mathematica (see through bushes! funkify your portrait!), a Wolfram representative wrote me to let our readers know that you can get a fully-featured license to Mathematica 7 Home Edition for just $295.
This is a good deal. You can learn more about it here.
By the way, we use Mathematica 7 (I defy you to click that link) at Actuality, and have noticed some great enhancements over v6.
For example, it now runs wonderfully on my 2004-vintage IBM ThinkPad T41 - whereas, in the past, 3-D visualization would close the program. It seems to take advantage of multi-core processors. Also, it comes with new image processing functionality that makes it easier to... aw, heck, check it out for yourself. If you use the professional edition at work, I think it comes with a home-use license too.
18 February 2009
Today I'd like to share progress on my efforts to learn about "synthetic aperture photography," a branch of computational photography.
Jenn and I drove up Mass. Ave. in Lexington (i.e., Paul Revere Lexington), put a video camera out the window, and filmed the buildings whizzing by.
Here is a typical collection of frames of, say, a school (or apartment?) blocked by trees:
After picking, say, 16 successive movie frames, you can stack them up, shear them, and slice them to simulate having a camera that's 20 feet long or so. Why? This manipulates the data so as to create the final image of the building with the trees removed:
This won't be news to those of you in the field of computer graphics since, say, 1997, but there's a lot of activity analyzing optical systems - such as large-lens or multi-lens systems - for benefits in graphics. Marc Levoy, a professor at Stanford, held a class on a collection of topics he called "computational photography." Today, the field, building on work in computer vision and 3-D display, includes:
- Snapping a photo, and not worrying about focusing it until later, e.g. when you get home.
- Visualizing 3-D scenes from angles between those you've photographed, such as the post-inauguration Microsoft / CNN project.
- Taking many snapshots of a scene - say of a building obscured by bushes - and then "erasing" the bushes to show what's behind.

I've been attending talks, flipping through papers, and watching colleagues try their hand at this burgeoning field.
How It's Done
(1) Take video at constant velocity along a linear track, (2) collect the frames into an array - a spatio-perspective volume - and recenter the synthetic film plane on the region of interest, and (3) average over all frames & display it.
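Steps (2) and (3) can be sketched in a few lines of NumPy (my own sketch, not the Mathematica notebook; `frames` is assumed to be a (T, H, W) array of grayscale frames, and `px_per_frame` is how far the object of interest drifts per frame):

```python
import numpy as np

def synthetic_aperture(frames, px_per_frame):
    """Refocus a video stack on an object drifting `px_per_frame`
    pixels per frame: shear the stack so that object's track becomes
    vertical, then average all frames to simulate one giant lens."""
    t, h, w = frames.shape
    pad = px_per_frame * (t - 1)
    sheared = np.zeros((t, h, w + pad))
    for i, frame in enumerate(frames):
        # Recenter the film plane: earlier frames shift further right.
        offset = px_per_frame * (t - 1 - i)
        sheared[i, :, offset : offset + w] = frame
    return sheared.mean(axis=0)   # the giant-lens average
```

Anything drifting at a different rate than `px_per_frame` stays misaligned across the stack and blurs out of the average, which is exactly what happens to the trees below.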
I like Mathematica 7 for its functionality and elegance (though not its memory management). We took a few minutes of video & chose some few-second clips using our MacBook's video editing software. Exported it as a QuickTime movie, and from QuickTime Pro, exported an AVI. (For some reason my Mathematica won't play nice with MOV.)
Import a good 16- or 32-frame segment into an array of images. I chose frames that showed a building in the background with plenty of occluders, minimal vertical jumpiness (remember, we were driving), and relatively constant speed. To save memory, I cropped the images to a horizontal region. Mind you, the linear camera motion doesn't need to be at constant velocity, but it will make your life easier if you'd like to automate the process.
Verify the constant velocity by viewing a 2-D slice of the 3-D "spatio-perspective volume," an informal approximation of an "epipolar-plane image," as Bolles and Baker called it in the 1980s.
What's that? For the purposes of the blog post, it is a 2-D image with time occupying the vertical axis and space occupying the horizontal.
Hang with me for a second. Even if you're not a computer graphics nut, I think this offers an interesting way to view the world, in the spirit of an earlier post regarding how engineers sometimes find it easier to manipulate information if it's first converted into a different format.
(Links to: Jan Neumann)
Imagine taking a sequence of frames from that movie, above, printing them out, and then stacking them up like a deck of cards. The back card will be the car's starting point, and the facing card will be the car's ending point. Got it? Now imagine whipping out your sharpest knife, slicing through the deck, and peering down on it.
It would look like this:
Since nearby objects whiz past our field of view quickly, they cover lots of ground in very little time. The nearby trees and signposts therefore are the most horizontally-slanted objects in the image above. On the other hand, the school is in the distance, so it appears to creep along as we drive past it. It's nearly vertical in this representation.
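In code, the slice is almost embarrassingly simple: stack the frames and keep one scanline from each (again assuming a hypothetical (T, H, W) NumPy array of frames):

```python
import numpy as np

def epipolar_slice(frames, row):
    """Cut the spatio-perspective volume at one scanline: the result
    is a (T, W) image with time running vertically and horizontal
    image position running horizontally. Nearby objects trace steep
    slanted tracks; distant ones are nearly vertical."""
    return np.asarray(frames)[:, row, :]
```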
Fortunately most of the tracks through the image are linear, meaning that this process can be completed with a minimum of pain.
What if we wanted to "freeze" the motion of the building, so we could synthetically "focus" on it? We'd need to recenter the film plane by shearing this stack of images. By trial and error, it turns out that the building moves 4 pixels to the right for each successive frame.
We recenter the data by incrementally padding it. If we do it correctly, the building's spatio-temporal tracks will be vertical:
Recentered film plane.
Now we just need to eliminate those pesky trees. Real-life lenses are good at doing that, because if the lens is big enough, it can "look around" the trees. In aggregate, little pieces of the lens really do get to see the whole face of the building. And so does our video camera.
We can simulate this giant-lens action by simply averaging over all of those frames. (And thanks to my co-worker, Joshua Napoli, for putting it so simply. Here is his similar project of last year - viewing houses through trees - but his blog post [had been] AWOL.)
What does it look like?
Success. Trees are blurred out. Compare to the photos at the top of this post.
How can we take this a step further? We can simulate a variable-focus lens by computing what every possible set of shearing parameters will do. That is, we can tilt our deck of cards by varying degrees so that the space-time paths of various objects become vertical, and hence able to be imaged by our gigantic synthetic lens.
Here's how this looks. Let's recenter every 7th pixel so we can "focus" on a tree in the center, and "stop down" the aperture by averaging over fewer frames than we did above.
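Sweeping the shear parameter gives a whole focal stack. Here's a sketch of that loop, under the same assumptions as before (a (T, H, W) NumPy array of frames; each value in `shifts` is one candidate "focal depth" in pixels per frame):

```python
import numpy as np

def focal_stack(frames, shifts):
    """For each candidate per-frame shift, shear the stack so objects
    drifting at that rate align, then average: one refocused image
    per shift. Objects moving at other rates blur out."""
    t, h, w = frames.shape
    images = []
    for s in shifts:
        pad = s * (t - 1)
        sheared = np.zeros((t, h, w + pad))
        for i, frame in enumerate(frames):
            offset = s * (t - 1 - i)
            sheared[i, :, offset : offset + w] = frame
        images.append(sheared.mean(axis=0))
    return images
```

To "stop down" the aperture as described above, pass a subset of the frames instead of all of them.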
Still awake? If you're interested in this stuff, try out:
- Email me if you want me to post the Mathematica 7 notebook on my personal website.
- Browse this collection of papers at Stanford, here. MIT is also active.
- RC Bolles, HH Baker, "Epipolar-plane image analysis: a technique for analyzing motion sequences," Proc. IEEE Third Workshop on Computer Vision: Representation and Control (Bellaire, MI), Oct. 13-16, 1985.
- A Adams, M Levoy, "General Linear Cameras with Finite Aperture," Eurographics Symposium on Rendering (2007).
- R Ng, "Fourier Slice Photography" SIGGRAPH 2005.
- W Chun, OS Cossairt, "Data processing for three-dimensional displays," allowed US Patent.
- MW Halle, "Holographic stereograms as discrete imaging systems," in Proc. SPIE 2176, Practical Holography VIII (1994).
- M Levoy, P Hanrahan, "Light Field Rendering," SIGGRAPH 96, 31-42.
ps A big thank-you in advance to anyone who can explain why Blogger: (1) doesn't insert images at the cursor, but rather on top; and (2) why an extra linefeed appears for every paragraph whenever I insert a photo.
16 February 2009
10 February 2009
04 February 2009
For some reason I got a kick out of this paragraph (this is from the OSA):
Although this paper need not be exceptional, it should add significantly to the field for you to recommend acceptance or revision. Lately, a substantial number of papers have been submitted that can be called "not wrong" papers. These are papers that contain no errors, but they also lack any new and useful information that would move your field forward; they may provide no citable results, or document so little progress that researchers in your field will ignore them. These papers take up your time and ours; they clutter up the literature; and they do not advance research in the field. If you find this paper fits this description, you should recommend that the paper be rejected.
03 February 2009
02 February 2009
01 February 2009
Here is a good top-10 review from C|Net Crave (in the UK). From Oct. 2008, but it seems reasonably up-to-date.
(Do you have and like your netbook? Ever written a document on it? Use Linux / StarOffice? Read a technical paper in PDF? I'm curious.)
16 January 2009
Matthew Barney, who "stages timeless fictions in the form of hybrid installations, filmed performances and stylized videos." From what I gather, he films intricately-staged fictional environments, and then exhibits photographs (still frames) of those films. I think. They had it at Mass MoCA once. His Cremaster series is evidently his best-known work. Here's CREMASTER 1, which includes the cinematic trailer.
Thomas Demand also photographs fictional scenes, but he is more likely to create a stark office out of simple materials and then take a snapshot - but we don't realize it's of a fake office. (Once the PHOTOGRAPHS page loads, you can mouse over to scroll through some of his collection. I like "Studio.") The MoMA exhibition notes say, "Demand begins with a preexisting image culled from the media, usually of a political event, which he translates into a life-size model made of colored paper and cardboard."
Katharina Fritsch's eerie super-sized models of, I don't know, people at meetings and giant rats.
Jenny Holzer, but, hey, everyone likes Jenny Holzer. I mean, a strong sense of duty imprisons you, right?
I'd like to see Henrik Plenge Jakobsen's work in person:
And I doubt I will tire of Jeff Koons. His site is an exhaustive catalog of his work.
Steven Pippin "...succeeds in recalling for a brief moment those sentimental hopes that were once placed in photography and television..."
I wish they had included Arthur Ganson, Anna Hepler, and Steve Hollinger.
14 January 2009
(This post is for the "benefit" of my Facebook friends.) From time to time I like to share my culinary adventures and mishaps. In the middle of cooking this, I thought that it was going to trainwreck into disaster-land - but somehow it all came together. Fortunately this is a keeper for me, rather than an entry from the diary of my disqualification from Home Top Chef. Not yet at least.
- 2 boneless pork chops cut into 1"-cubes
- 1 can of black beans
- 1 can of kidney beans - open and drain the beans now so you don't burn the garlic later
- 1/2-ish cup of chicken stock
- handful of cilantro, chopped
- 1 cup of Chiavetta's marinade (come to think of it, it's unlikely you'd have this on hand, but I suppose salting the pork would get you halfway there) (thank you Bridget)
- 2 cloves garlic, diced
- 5 pieces of good bacon - e.g. Boar's Head - sliced into 1/2"-wide fingers
- 1 little packet of Sazon Goya (why we have this I have no idea)
- Juice of 1 lime
- Rice, like a few cups of brown or white rice
Find yourself a decent sauté pan, like a heavy Calphalon
- Put the pork cubes into a bowl or bag and marinate in Chiavetta's for 15-30 minutes
- Fry the bacon fingers (excuse me, lardons) on medium-low heat until they're done, remember to flip them, and transfer to a paper-towel-covered plate
- Turn heat up to medium-high
- Place the pork cubes into the pan, shaking off the marinade, for about 4 minutes per side. Check the pork to make sure you don't over-cook it. They should brown nicely.
- Transfer pork to a bowl and set aside
- Turn heat back down to medium-low and sauté the garlic until fragrant
- Add the cilantro
- Make sure you don't burn the #&@! garlic, tough guy!
- Add both kinds of beans
- Stir up all that goodness
- Add the chicken stock, Sazon Goya, pork cubes, bacon, and stir
- Bring to a boil
- Bring down to a simmer, add the lime if you actually have it
- Reduce it until you're tired of reducing it; the goal here is to keep it wet
- Cover the pan
- Taste it and marvel at your impromptu brilliance
- When the rice is done, feel free to add it to the pan and mix well, and let it simmer some more. Add some chicken stock to keep it moist, if needed.
- When your tired family comes home from their play date, feed them, and bask in your own glory
Comments? Complaints? Contact our help line.
13 January 2009
Just a quick note; if you are interested in the history of cinema or the ups-and-downs of marketing a disruptive technology, you might want to attend journalist Scott Kirsner's last two East Coast book tour events: Thurs. Jan. 15 (Concord Free Public Library) or Wed. Feb. 11's chat at the Boston Public Library. He'll be discussing his new book, Inventing the Movies.
Also, the SPIE-IS&T's Stereoscopic Displays and Applications conference is next week in San Jose, CA. 20th anniversary! Hoo-ah. You don't need to bring your anaglyph or polarized glasses, because, as all the cool kids know, they give them to you there.
ps Optics people: have you seen the great talks scheduled for the New England Section of the OSA? Photolithography, computational photography, and biological imaging...
10 January 2009
It's possible that I'm drinking too much espresso-that-I-thought-was-coffee, reading webpages about how to use my Moka Express, and traveling to Helvetica-saturated continents. But have you seen the awful redesign of the Tropicana brand?
I was shopping at Stop & Shop and almost missed the orange juice section because it looked like a blankish wall of... I don't know... industrial-grade biochemical products, or no frills dry milk, or signage for a Swedish hospital.
No! It's the Tropicana redesign! Enjoy it here. (@ underconsideration.com )
Don't even get me started on the Eurostile-ification of MA, the newest victim being the Capitol Theatre (following the Alewife typography mishap, I guess).
Whoa, who put the extra snobby in my espresso today?
02 January 2009
I thought this was funny: peHUB's "Translating PE-Speak" by Erin Griffith. But let's stick to our knitting.
Edge posted responses to their 2009 Annual Question, "What will change everything?" Hear what Brockman's posse has to say.