Grasshopper and AR/VR for Rhino UGM – February 2019
With Grimshaw, Fologram, Foster + Partners and Chaos Group
For our first User Group Meeting of 2019 we are combining our usual Grasshopper UGM with our VR/AR user group.
The group is for those who are interested in meeting in order to network, discuss and explore Grasshopper3d and virtual and augmented reality solutions for Rhino3d.
The meetings follow a simple format of at least one presentation from a customer with experience in this field, followed by group discussion and informal pleasantries.
Confirmed Presenters are Grimshaw, Fologram, Foster + Partners and Chaos Group.
Details: This meeting took place on Thursday 21st February 2019 at Grimshaw, 57 Clerkenwell Road, London, EC1M 5NG
Special thanks to Andy Watts and the team at Grimshaw for hosting this latest UGM.
Meeting notes are available at the bottom of this page.
Preceding this UGM there was a 3-day workshop with Fologram on 19th, 20th and 21st February 2019 at Grimshaw – find out all the details about this workshop here on the Simply Rhino site.
Presentation by Grimshaw
Within the Design Technology team at Grimshaw, VR and computational design are key components of the work we undertake to support our design teams and research new ways of working.
The use of VR has grown to become a well-established part of the design toolset at Grimshaw, from internal reviews and design checking through to client presentations and stakeholder engagement. More recently, AR has shown the potential to introduce a new facet to this, overlaying our design information on a more readily understandable physical context, be it at full scale or otherwise.
From projects such as Waterloo International through to the Dubai 2020 Expo opening next year, the work of Grimshaw has always had a strong relationship with computational design. Today, tools such as Rhino and Grasshopper are integral to our everyday work.
Recently, our in-house Design Technology team has been looking at ways of merging these two key work-streams. Whether through bespoke in-progress workflows or through more developed tools such as Fologram, we are actively seeking ways to enable our teams to harness the power of computational design tools such as Grasshopper in an immersive 3D environment.
Presentation by Fologram
Fologram is a toolkit that allows designers to quickly build interactive mixed reality (MR) applications within Rhino and Grasshopper. By providing users with access to device sensor data (spatial meshes, gesture events and computer-vision tools running on camera feeds) as inputs to parametric models, the full ecosystem of Grasshopper plugins (physics simulations, structural and environmental analysis, machine learning and so on) can be extended to run in mixed reality. Gwyllim Jahn and Nick van den Berg will demonstrate applications developed with Fologram by partners and clients that augment existing processes of design, modelling, analysis and making.
Designing and making within mixed reality environments extends the skills and capabilities of designers and builders by improving spatial understanding of design intent and reducing the risk of human error associated with extrapolating 2D instructions to 3D form. These new capabilities dramatically improve the ability of conventional craftsmen and construction teams to fabricate structures with significant variability in parts, form, structure, texture, pattern and so on, and in many cases completely reverse design viability as impossibly expensive and difficult proposals become straightforward, low risk and cheap. Complex designs can now be fabricated on standard building sites, with cheap materials and tools, and without expensive expertise or design documentation.
We will discuss work from Fologram that investigates the implications of MR assembly methodologies on architectural design through the lens of several architectural prototypes. Could making in mixed reality allow us to refigure CAD-CAM not as a means of working to high degrees of tolerance and precision but instead as a return to craftsmanship, intuition and reflexive making? How will the medium of MR enable new forms of collaboration between designers and manufacturers, or between humans and machines? What new architectural forms might be found in this superposition of the digital and the craftsman?
At the end of the presentation there will be the opportunity to have a brief demonstration of the Fologram toolkit on the HoloLens and mobile phones, and discuss applications within research, teaching and practice.
Check out Fologram’s Vimeo channel to see Fologram at work.
Presentation by Jonathan Rabagliati from Foster + Partners
The Bloomberg Ramp | Rising through the centre of the building, the distinctive hypotrochoid stepped ramp animates the whole Bloomberg office space. Fabricated as a steel monocoque, the ramp is clad in bronze panels. Its form is based on a mathematical curve called a hypotrochoid, which forms a smooth, continuous three-dimensional loop rising up to the skylight. Each loop cuts through a near-elliptical opening in the floor plate, and these elements, rotating through 120 degrees on each level, create dramatic views that open deep into the building.
The ramp is central to the way Bloomberg chooses to operate, embodying a sense of movement and dynamism through its form and function. The ramp is conceived as a place of meeting and connection, between people and parts of the office. As the primary connection between the floors, it acts as a great social condenser for the building, bringing both life and light to the building.
The presentation by Jonathan Rabagliati charts the story of design through fabrication, using computational design, VR, laser scanning and metrology, and close collaboration with structural engineers and contractors to realise a remarkable design.
Presentation by Chaos Group | V-Ray Next: Immersing in Parametric Design
With 15 years at the forefront of expanding the possibilities of visualisation, Chaos Group have made groundbreaking performance the expected feature of every release, and have moved on to pushing the limits of what is generally possible to visualise. There is no better testament to that than the latest V-Ray Next line, even more so with the forthcoming V-Ray Next for Rhino.
On February 21, CG Specialist Lyudmil Vanev takes the stage with something more powerful than a new-version presentation, however exclusive that may be. Lyudmil will show Chaos Group's whole take on the way designers can see and experience their designs, adding substance to the concept of visualisation as a design tool with an integral role at every stage of the design process.
You will get a detailed, exclusive preview of how V-Ray Next for Rhino completes an approach begun in V-Ray 3: direct access to rendering from within Grasshopper, without the need to exit, bake and so on. V-Ray's entry directly into the parametric toolset moves towards interactivity and a new depth of immersion and understanding of changes, parameter impact and design evolution, simply by giving the designer a new set of eyes: seeing everything right where it happens, straight within the parametric script, changing in real time and, if needed, with the most realistic materials. This is also the main reason for, and entry point into, interactive immersive virtualised design with V-Ray Next for Rhino.
Operating straight within Grasshopper, V-Ray brings its complete feature set, from rendering animations through to supporting VR scans. Furthermore, it brings two major opportunities: GPU rendering for speed and computing power, and a bridge to V-Ray for Unreal, providing a seamless transition from the parametric plugin into interactive virtual setups.
So – interactive, fast, realistic, gamified parametric design. Firsthand, for the first time, and with a hint at the next areas of research and development, straight from the team.
Grasshopper3d and AR/VR for Rhino3d Meeting Notes | Grimshaw | February 2019
Georgios Tsakiridis | Grimshaw | How does VR make a difference in the design process?
VR and AR sit within the research cluster of Grimshaw's Design Technology department, which has champions for each area of interest and research. They try to be early adopters: they held an AR exhibition about four years ago and now use 360° VR scenes as a standard project deliverable. They also have a VR cave available when a project can justify it.
VR as a new way of working: in general, a small group within the practice will explore new technologies first and then roll them out across the practice. But VR can be a relatively simple technology with which everyone sees results quickly, so does it change what and how people design?
They set up rooms for Vive rigs and gave small headsets like the Samsung Gear to design teams. The use of these simple headsets means that the design teams use them in the design process itself, not just in client reviews. They started with simple workflows, using Enscape and IrisVR, which offer quick, reliable output and are accessible to the team.
For stakeholders, VR gives unmatched clarity, without the distortions inherent in CGIs.
MIPIM was a key first showcase, where they demonstrated a model of their Dubai Sustainability Pavilion, but the ‘Heathrow Horizon Community’ engagement was perhaps more important because they created a set of 360s of key passenger journey points. The ‘Horizon’ is a group of frequent flyers who were shown VR scenes of a generic airport pier environment and asked to assess their perceptions of its width, amenities, comfort and so forth. VR allowed swift engagement with the complexity of an airport's systems, with members of the group even able to start plotting out airport layouts themselves.
These early experiments led to the development of a wishlist for VR in the practice: better design tools, integration with Rhino and Grasshopper, easy-to-customise interfaces, scene interactivity, live linking between applications, and some augmented reality.
Mixed Reality with AR
The journey is now towards the mixed-reality world of AR. To explore this, Grimshaw hired a specialist games designer and started working with Fologram, thanks in part to its easy workflow from Rhino. They saw its immediate potential, so implemented AR in quick review sessions, e.g. dynamically adjusting a stadium roof, a process far quicker than the equivalent 3D printing for design review. They have also tried AR at the masterplan scale, seeing the impact of adjusting the volumes of buildings in relation to one another. AR here has a distinct advantage over VR in that the ‘sunglasses’ style of headset means the user can stay in the conversation taking place around the model.
Grimshaw have also been developing custom apps and engaging with the video game platforms Unity and Unreal. This is time-consuming and requires coders on the team, but it does provide a degree of photorealism and the animation of elements such as the doors on an underground train. The user finds themselves in a much more immersive place than before.
Grimshaw feel that they are still in the ‘humble beginnings’ of working with AR as an in-house technology. They are still exploring the tools and workflow, but it's a priority for investment. The ‘holy grail’ is interoperability: can you connect Rhino and Grasshopper with video game engines? That has been happening for a while, but what has excited the team recently is the ability to run ‘Rhino Inside’, with the software being called from within other platforms.
Go-Rhino-Go
‘Go-Rhino-Go’ is an open-source GitHub project developed at a hackathon in New York in conjunction with architects from Foster + Partners and others. It allows you to call Rhino and the relevant libraries to build geometry in Rhino in real time from the Unity interface, combining the two worlds in a collaborative situation. It will never replace Rhino, but it is an in-between sketching tool with really big potential which they want to explore further. There are certain limitations due to how Rhino is developed, but they are in discussion with McNeel, and as Go-Rhino-Go is open source they are keen to see a community grow.
The advantage of game engines is their power to narrate and to communicate complex messages within a simple frame of constraints, so it becomes less about where you do the calculations and more about what systems like Unity can give us. As a result, Grimshaw have just welcomed a game developer to their team, a new breed in the world of AEC.
Lyudmil Vanev | Chaos Group | V-Ray Next for Rhino
Firstly, it will be smarter, so smart that it takes optimisation decisions for you. A new asset editor allows common libraries, stored wherever you want rather than inside V-Ray. It features a spline curve editor for value manipulation (e.g. hue, saturation), and metallic PBR-style shaders have been added. There's a light editor, where you can set up lights without making test renders of the scene, and a lighting heat-map analysis tool, as well as new multi-matte elements for compositing.
V-Ray Next also has two new patented algorithms governing scene intelligence.
There's an adaptive dome light that can use image-based lighting; there's no need for light portals any more, just use the dome. V-Ray Next now has auto exposure and white balance for scenes, so V-Ray can create perfect lighting for you, and it handles the difference between interior and exterior lighting.
Next has cut render times by 2 to 5 times, even up to 11 times in some cases. Next is generally 20-50% faster for exterior scenes, and with GPU processing up to 18 times faster (again in 3ds Max). The general message is that you can achieve more with less.
Denoising was good in version 3.6, but there was only one algorithm. It was perfect for cleaning up the end result of a visualisation, but what about a faster workflow? So they have added a new denoiser using Nvidia AI, trained on thousands of denoising patterns.
VRScan GPU
Chaos Group's material-scanning technology has been in development for 10 years. You can put any material sample inside and VRScan captures mathematical data describing its response in every direction. Clients used to complain that hand-built materials didn't look like their physical samples; you could spend weeks tweaking them and they still wouldn't be happy. With the scanner, they look real.
V-Ray for Grasshopper
V-Ray allows you to render Grasshopper geometry without baking it. This leads to the ability to create animation in Grasshopper and render it directly, via a V-Ray scene node in Grasshopper. You can also create materials in Rhino and manipulate them in Grasshopper. Grasshopper can also control the lighting, camera and sun, again creating dynamic scenes without baking.
Overall, these new items are about a tenth of what’s coming…
V-Ray’s VR and AR Pipeline
Using the vrscene transfer format is a great solution for taking work into Unreal. It does have one limitation: everything has to be a texture, as Unreal doesn't accept procedural definitions. There's also still the need to export, as there's no live connection yet. But the V-Ray scene file contains all the geometry, lighting and so on, and V-Ray for Unreal converts shaders, lighting etc. into native Unreal definitions. In the Unreal settings you can directly select V-Ray denoisers and other features, and V-Ray will bake all of the lighting within Unreal; you can even manipulate the bakes in Photoshop as they are not hidden away.
Project Lavinia
This is a new real-time ray-tracing viewer, based on Nvidia DXR technology. It's a drag-and-drop viewer for V-Ray scenes created on any V-Ray platform. It can handle scenes with billions of polygons without prebaking or faked reflections. Where is it going? Will it be useful? Feedback to Chaos Group, please! The alpha for 3ds Max is out already, and Rhino is coming.
Long Nguyen | Research Associate at the University of Stuttgart | C# Scripting and Plug-in development for Rhino
Long teaches classes which assume no prior knowledge of C#; during the course of the workshops the students learn it and develop their own plugins. He also shows algorithms for computational design that achieve logic not possible with the visual components of Grasshopper alone. He also teaches good, clean programming practices, enabling the creation of plugins that can be packaged and distributed commercially. Example use cases include getting elements to obey rules, e.g. don't self-intersect, or studying the liquid erosion of a terrain. The next introductory classes with Simply Rhino will be in June.
Advanced classes coming soon.
In September, Long will also offer advanced versions of the workshop, for example parallel computation in C# for proximity checking or how to make a Grasshopper plugin to undertake heavy calculations in the background without freezing the main user interface.
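The parallel proximity-checking idea mentioned above can be sketched very simply. The workshop itself uses C#; the function name and chunking scheme below are invented for this Python illustration, which splits the outer loop of an all-pairs distance check across a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def close_pairs(points, radius, workers=4):
    """Find all pairs of points within `radius` of each other, splitting
    the outer loop of the all-pairs check across a pool of workers."""
    def pairs_for(i):
        # Each task handles one row of the upper-triangular pair matrix.
        return [(i, j) for j in range(i + 1, len(points))
                if math.dist(points[i], points[j]) <= radius]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(pairs_for, range(len(points)))
    return [pair for chunk in chunks for pair in chunk]
```

In a real plugin the same pattern would run on a background thread so the Grasshopper UI stays responsive while the check completes.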
Jonathan Rabagliati | Foster and Partners | The Bloomberg Ramp
The project for a grand ramp in the Bloomberg building in the City of London started seven or eight years ago, but has its roots in work done by the practice 20 years ago at the Reichstag in Berlin, and later at the GLA Building in London. There they learned some of the tricks of creating a minimal, smooth appearance while satisfying code requirements for level sections within the slope.
At Bloomberg, the aim was to build a building with a huge internal area that still respects the medieval street pattern. In the heart of the north zone of the plan there is a huge triangular space and atrium, with a ramp that rotates as it passes each floor. It's not just a conduit for people; it's also part of the ventilation strategy.
One of the challenges was how to get the client's head around what they were designing. They did 3D prints and presentation models, and they did lots of renders. But the development process was necessarily complex: having designed the model parametrically in Grasshopper, every ‘frame’ in the animation of the ramp was its own Rhino surface model, so there could have been an infinite number of different ramps.
Jonathan is passionate about curvature, and using ellipses as the basis for the form disturbed him. The inherent tightening and loosening of the curvature was no good; he wanted a more elegant solution. So he plotted the acceleration and deceleration of the curvature, and it revealed unwanted kinks. He turned instead to an equation that works just like a Spirograph: rolling one circle around inside another, with the ratio between the gears of the moving wheel and the circular frame creating a trochoid. In turn you end up with the setting-out of the ramp, with the skylight above defined in a similar way.
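The Spirograph construction can be written down directly. This is the standard parametric form of a hypotrochoid; the radii and pen offset below are illustrative values, not the Bloomberg setting-out numbers:

```python
import math

def hypotrochoid_point(t, R, r, d):
    """Point traced by a pen at offset d from the centre of a circle of
    radius r rolling inside a fixed circle of radius R (a hypotrochoid)."""
    x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
    y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
    return x, y

# With R = 3r the curve repeats under a 120-degree rotation, echoing the
# ramp's rotation through each floor plate.
curve = [hypotrochoid_point(2 * math.pi * i / 360, 3.0, 1.0, 0.5)
         for i in range(361)]
```

Changing the gear ratio R/r and the offset d sweeps through the whole family of trochoids, which is why the setting-out could be tuned parametrically.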
There was then a dialogue to and fro with Adams Kara Taylor engineers to refine and simplify the geometry. The beauty was that you could pull out the structural model and plug it into his hypotrochoid model, which would then update the engineers' model at the same time. This process eliminated lags in coordination, but required common naming conventions and a shared language to make the collaboration work. Rather than wasting weeks on coordination, they could get on with building the Grasshopper model and doing detailed analyses of load cases for all 96 steps and the knock-on effects on all the other steps. It created a matrix of data that could be interrogated, and the efficiency freed up engineering resource for a far more in-depth study than is usually done. Overall, the greater clarity reduced the uncertainty factors.
Full scale prototyping was very important for user comfort, and also to convince the district surveyor that the proposed gaps around a glass infill panel at the landings of the ramp would be safe.
In a combination of precision and brute force, they ended up using a contractor based in Japan for the bronze cladding panels, with the substructure created at Littlehampton Welding and the elements coming together after a series of overnight deliveries into the City of London. For the contractors, they made a simple set of instructions listing the variables for each element, plus diagrams, which became a 96-page method statement of how to build it.
During the design process, Michael Bloomberg visited a mock-up and, being of lesser stature, questioned the height of the balustrade and wanted it lowered, to which the senior partner at Fosters said ‘yes!’, not knowing the consequences. But through another equation, the team were able to find a solution. In January 2014 they made pioneering use of the Oculus Rift, in one of the first projects to use it to test different options rather than just for client review. They had to find a way of smoothing the curves of the lowered balustrade while retaining the setting-out at the floor levels. To resolve this they had to introduce an s-curve to smooth the shapes, but it had to do so imperceptibly. So they tried various s-curves and used VR to see which looked best.
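A cubic smoothstep is one common way to build exactly this kind of imperceptible blend. The sketch below is an illustration only; the heights are hypothetical and the exact easing function Foster + Partners settled on is not specified in the talk:

```python
def smoothstep(t):
    """Cubic s-curve with zero slope at both ends, so the blend meets the
    neighbouring geometry without a visible kink."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def blend_height(h_floor, h_lowered, t):
    """Ease the balustrade from its fixed setting-out height at the floor
    plate (t = 0) down to the lowered height along the run (t = 1)."""
    s = smoothstep(t)
    return (1.0 - s) * h_floor + s * h_lowered
```

Because the slope of the s-curve vanishes at both ends, the transition joins the fixed floor-level setting-out tangentially, which is what makes the change read as imperceptible.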
The whole development process took months, with the Rhino geometry eventually transferring to a fabrication model produced in Catia within a tolerance of .004mm. Then came the amazing bit: yes, the fabricators had an accurate model, but did they build it right? And would the cladding fit? To check this, they did three one-billion-point laser scans of the installed substructure. They then brought that dataset into Rhino, tested it against their design geometry and colour-coded it for clash detection. The maximum deviation was 24mm over six or seven floors, meaning there were a few areas where they had to alter the geometry. But by this time the very beautiful and very expensive Japanese bronze cladding was landing at Tilbury Docks and couldn't be changed. So the adjustment process combined the scan and the solid model into a virtual model where they could ‘jiggle’ the bronze panels, distributing the minimum shift across them all, with rules for how the periodic 10mm shadow gaps could be tweaked accordingly.
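The scan-versus-design check boils down to a nearest-point deviation map. This sketch uses invented point sets and illustrative colour bands to show only the idea; a production pipeline over billions of scan points would use spatial indices rather than brute force:

```python
import math

def max_deviation(scan_points, design_points):
    """Worst-case distance (in the inputs' units) from each scanned point
    to its nearest point on the densely sampled design surface."""
    return max(min(math.dist(s, p) for p in design_points)
               for s in scan_points)

def colour_code(dev_mm, ok=10.0, limit=24.0):
    """Traffic-light banding for a deviation map (thresholds illustrative)."""
    if dev_mm <= ok:
        return "green"
    if dev_mm <= limit:
        return "amber"
    return "red"
```

Colour-coding each scanned point this way is what turns a billion-point cloud into a clash-detection picture a design team can read at a glance.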
Although they were using metrology with sub-millimetre precision, in the end the specification writing was key. The spec just called for ‘a smooth continuous curve’: a few words as opposed to all that data! And you find yourself reverting to written definitions, such as “plus or minus 2mm”, and arguing on site when the contractor points out that this actually allows 4mm of misalignment, because it's plus 2mm on one panel and minus 2mm on the next. The moral here is that for all the computational sophistication, don't disregard the specs!
It’s all very well designing or making things with these tools, but the process of actually realising something like the Bloomberg Ramp is just as fundamental and crucial. And don’t lose sight of the fact that the end result is about simple human interactions: the ramp enables casual interactions and conversations to take place. And one final nice reward was that the plan of the ramp was adopted as a logo for the building.
Gwyllim Jahn | Fologram | Making in Mixed Reality
Fologram are building software for mixed-reality devices so that designers can use them for design and making. They're interested in how you go from design packages to making things in the real world without 2D drawings.
AR technology was originally about aiding fabrication: it stems from work done by two Boeing engineers to enable the accurate placement of systems within an airframe under construction. It can still be used for precise registration, but also for shared experiences and for building natural, intuitive interfaces.
Fologram work with the Microsoft HoloLens, which offers precise tracking, but the downside is the need to develop in Unity. So they have made a bridge from Rhino and other platforms.
Their target is to reduce the time and cost risks of experimental architecture. A case in point is Frank Gehry's Dr. Chau Chak Wing Building at the University of Technology Sydney, where the undulating curved brickwork façade had to be installed by an expert team with painstaking precision, meaning that a bricklayer used to laying 400+ bricks per day was down to 80 bricks a day.
There was a clear need to make the process simpler and faster: to avoid the need Gehry's office had to provide setting-out information for every single brick, to be able to use less-skilled workers rather than just master bricklayers, and for them to be able to work in parallel. So Fologram did a small test build of a sinuous brick façade using local ‘brickies’, who were able to build in one day what would otherwise have taken weeks, because each of the crew could see a projected hologram of exactly where each brick should go. The brickies themselves were super-excited: using less-skilled labour alongside masters meant better fees, a faster installation and a better result for the architects.
Fologram also work with art fabricators, who can use virtual templates to rapidly develop work as they go, without a steep learning curve.
It's a case of using old tools for new tricks. Can we rethink old design tools? Now you can stream a model to multiple devices, so you can have collaborative modelling without CAD skills. With Fologram, three people can work on one Rhino document simultaneously, just using three iPhones.
A classic test is the three-dimensional Voronoi diagram: can it be made quicker using these tools? Now you can combine the precision of digital modelling with the ability to overlay analogue tools, all without 3D printing. They overlaid a Voronoi hologram from Grasshopper onto the workshop of a Chinese fabricator, who just had to follow the hologram and bend the metal components until everything was just right. You can even then use Fologram to augment the physical object with AR-animated elements, like a breathing skin.
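The rule behind a Voronoi diagram, in any dimension, is simply nearest-seed assignment; the cell walls lie where two seeds are equidistant, which is what produces the faceted patterns the fabricator bends the components to match. A minimal sketch, with invented seed coordinates:

```python
import math

def voronoi_cell(p, seeds):
    """Index of the Voronoi cell containing point p: by definition,
    the index of the nearest seed."""
    return min(range(len(seeds)), key=lambda i: math.dist(p, seeds[i]))
```

A full 3D Voronoi mesh (as Grasshopper generates) additionally computes the polyhedral cell boundaries, but membership queries like this one are the underlying geometry test.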
And the system is very lightweight: even with just a laptop you can combine a live 3D scan of a space with Rhino models and interact with it using an iPhone. All of this can be done anywhere in the world with just a Wi-Fi LAN and a phone hotspot.
There’s a free mobile Fologram app available from their website and they are about to debut exciting new developments following the launch of the HoloLens 2.
Next Meeting! Our next Grasshopper User Group Meeting takes place in Manchester on Thursday April 4th 2019 at Arup. See here for all the details on the presenters and how to book your place.