This salon met at Think Coffee on Mercer Street in Manhattan, from 10:30 to 2:30 on Saturday January 14th. The timing worked out well – by arriving that early we were able to secure the big long table that can comfortably seat eight or more. Most people had arrived by 11:00. There were six of us: Nate, Charlie, Daniel, Jarrett, Caleb, and me.
Topics discussed (though I could not track them all as the discussion split at some points) included:
How to use multiple cores in typical scripting situations, such as using VB in Visual Studio, or using Processing? Caleb mentioned OpenMP, a library of compiler directives that lets you mark sections of code to run in parallel threads, which the operating system can then schedule across your computer's cores. (Charlie, Caleb)
The importance of matrix inversion in a lot of recent computer science and algorithm work – e.g. the PageRank algorithm, and also Amazon’s warehouse robot routing, which apparently relies on max-flow/min-cut graph algorithms. Multi-threading matrix inversion is a real challenge and there is a lot of computer science work in this area… how to make the raw math more efficient. Caleb kindly followed up the discussion with an email of links. (Caleb)
A useful trick that I did not know for doing rendering work on a Windows (XP or 7) computer: in Task Manager, right-click any process and choose “Set Affinity”… this lets you control which cores a given process may use, so you can essentially withhold a core from your renderer and keep it free for Photoshop or other post-production. (Charlie)
Subsurface scattering: apparently there have recently been super-slow-mo videos of light hitting a surface that show how subsurface scattering works in extreme detail. Also, long-exposure imagery shows that light penetrates apparently opaque limestone in small quantities. I can’t find the actual videos discussed, but here’s one about rendering techniques for subsurface scattering that shows a very similar mode of analysis/thought (and some good examples). (Caleb)
Brief discussion about rendering plugins for Rhino – I described my recent comparison between the V-Ray 1.5 beta and Brazil… V-Ray’s usability (that material editor!!! Jesus) is still so painful that I will continue to teach Brazil. Daniel uses V-Ray in Max a lot, and mentions that he has not bothered to experiment with the new real-time features because he already has a strong intuitive grasp of what tweaks are necessary from quick test renderings.
“HCI” vs. “CHI” (Human-Computer Interaction vs. Computer-Human Interaction) – i.e. the trend toward using computer algorithms to manage the actions or interactions of humans. This came up because of a conference ad we saw on the back of a magazine – it turned out the conference had no special agenda beyond HCI; they just switched the letters, probably to differentiate themselves from some other event. But, nonetheless, more and more interaction is being initiated or driven from the computer side rather than the human side. And, frequently, there is more *thinking* going on in the computer part of certain operations than in the human part (e.g. a web search, to be simple about it, but there are probably better examples). (Caleb)
We talked a bit about urban design, and the apparent lack of dedicated urban design software (i.e. one either uses architectural software like Rhino, or planning tools like GIS). Wouldn’t genetic algorithms in urban planning be interesting? Of course yes, but so far (according to a quick Google search) this seems to be the realm of hard planning papers… no good Google-accessible imagery produced, alas. (Nate, me)
A critique of Revit came up while discussing teaching. Jarrett: “Revit forces you to work from specifics to generalities” (whereas you would generally want to work the other way around). But I don’t know if there were any really heavy Revit users at the table – maybe Charlie?
Brief discussion of the Arduino and servo motors – Daniel is working on a prototype of an operable surface of some kind. He says he is looking for stronger servo motors.
We talked about teaching digital techniques from a few different angles. I was looking for support for my notion that teaching students to work “intentionally” in Rhino is a valuable thing (i.e. to be able to look at a form and think rigorously about the process one would use to model it), and also the importance of looking closely at material details when modeling. Charlie highlighted what he sees as a disconnect between older notions of ‘craftsmanship’ — e.g. the ability to hand-make a chair from wood — and newer notions of craft that connect to fabrication. I believe his point was (since this is how I think about it, anyway) that current approaches to teaching fabrication tend to favor a somewhat constricted, linear process that just creates multiple outputs and then tries to judge them, whereas achieving a deeper and more productive intuition of craft takes much more time, iteration, and close contact with the material in question. Charlie mentioned a studio he took at NJIT, taught by Anthony Carradano, where they started with models of chairs, then had to extract a structural system from them, rethink the materials, and develop this into a tectonic strategy of some kind… an interesting comparison point for my Arch 211/213 curriculum at Pratt.