GH scripting and workflow thoughts - part 2

January 17, 2021

Recently, I've been spending time developing a series of scripts previously created for specific projects, amending them and turning them into general tools intended for use by the entire office - that is, by Grasshopper novices and experts alike. Somewhere along the way, the exercise became an investigation of a design scripting philosophy, focused on two questions. The first, more high-level question: how can I write scripts or tools which are easy and intuitive for novices to use, and at the same time satisfying and conducive to addition and amendment for advanced users? The second, more direct and practical question: how do I combine our various tools into a workflow that makes sense in the broadest imaginable application? I discussed the first question in another post, while the second follows here.

The goal here was to identify the most common time-consuming activities in our theoretical ideal workflow paths, and the locations where the strengths of parametric design could be leveraged to improve said workflows. Let's double click on that statement and rephrase it in English: for most projects (let's say 80%+) we can easily imagine a relatively straight line of progression from concept to finished product. Of course, many people would disagree with that statement, claiming that design is a wavy, loopy, iterative process. What I mean is that we rarely experience our projects travelling in the opposite direction - projects generally move from less detail to more detail over time. This is reflected in the design actions, methods and tools that are employed. In the case of 3D modelling, we could say we generally move from large-scale massings to intricate BIM models, in steps of increasing detail and clarification. So as an engineering team that follows this process, we are applying our efforts to designs of simultaneously increasing resolution and complexity. And to the point: we want to be able to take general 3D model concepts, as provided by our architect colleagues, and not only transform these into parametric models (most often explicit 3D to implicit) but also convert them to a) analysis models and b) BIM models. Additionally, we want to retain a two-way link, such that changes both up- and downstream of the conversion can be transferred across. So the goal is to have one or several cooperating methods which enable people to convert between these model types while retaining the information stored and the changes made. As the BIM model remains (to my knowledge, or ability - especially as I have very little experience with Dynamo) too rigid to act as the central model for this purpose, the choice instead falls on the Rhino model as the hub model from which we, using Grasshopper as the actor, can convert the same model to both analysis models (using coded APIs) and to BIM/Revit (using RhinoInside).
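To make the hub-and-targets idea a little more concrete, here is a minimal sketch of what the dispatch could look like in a RhinoPython/GhPython script. The converter functions and the layer name are placeholders of my own invention, not a real API - in practice they would wrap the analysis software's API and RhinoInside respectively.

```python
import rhinoscriptsyntax as rs

def to_analysis_model(geometry):
    # placeholder: here you would build e.g. a finite element member
    # through the analysis software's API
    print("analysis model <-", geometry)

def to_revit(geometry):
    # placeholder: here you would hand the geometry to Revit via RhinoInside
    print("Revit <-", geometry)

def convert_hub_model(layer_name):
    """Read objects from the Rhino hub model and push each of them
    to both downstream targets."""
    for obj_id in rs.ObjectsByLayer(layer_name) or []:
        geometry = rs.coercegeometry(obj_id)
        to_analysis_model(geometry)
        to_revit(geometry)

convert_hub_model("Structure")  # "Structure" is an illustrative layer name
```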

I've worked on a methodology which tries to capture part one of the above: the linear progression (i.e. without the two-way link). Here, Rhino is used as both a geometry and data container, and so becomes a form of lightweight BIM. Using object attributes, an overlooked but native part of Rhino, we can add any amount of information in the form of key/value pairs to every object. I cannot overstate the power of this: it has direct effects on what you can transfer downstream, e.g. to Revit, but also on how you can shape your upstream scripts for both geometry and data manipulation. The methodology is characterized by an overall progression, broken into bite-size steps, which retains the possibility of looping back to make changes at a previous stage. The Rhino model is never manipulated manually, and all data is organized using layers and the aforementioned attributes. Each script, comprising one step along the way, references data from the Rhino model and finishes by exporting new or amended geometry back to Rhino. In this way, all scripts, or blocks within a script, communicate with each other only indirectly through Rhino. Each step is self-contained, easily understood and easily changed when necessary - as long as the output format is retained, you do not have to be afraid of breaking code further downstream. Thus a larger operation can be broken into manageable steps, ensuring simplicity and reducing the computation scope per script, which increases speed.
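To illustrate the pattern, here is a minimal sketch of one such step, written with rhinoscriptsyntax. The layer names and attribute keys are purely illustrative; the point is the shape of the script: read from a layer, read and write key/value attributes, and export amended geometry back to another layer, without ever touching the model by hand.

```python
import rhinoscriptsyntax as rs

INPUT_LAYER = "01_Columns"          # produced by the previous step (illustrative name)
OUTPUT_LAYER = "02_Columns_Tagged"  # consumed by the next step (illustrative name)

if not rs.IsLayer(OUTPUT_LAYER):
    rs.AddLayer(OUTPUT_LAYER)

for obj_id in rs.ObjectsByLayer(INPUT_LAYER) or []:
    # read an attribute left by an upstream step, with a fallback default
    section = rs.GetUserText(obj_id, "Section") or "HEB200"

    # copy rather than edit: the source objects stay untouched,
    # so looping back to an earlier step cannot corrupt this one
    new_id = rs.CopyObject(obj_id)
    rs.ObjectLayer(new_id, OUTPUT_LAYER)

    # attach/update key/value pairs that downstream scripts
    # (or Revit, via RhinoInside) can read back
    rs.SetUserText(new_id, "Section", section)
    rs.SetUserText(new_id, "Material", "S355")
```

Because each step only promises an output layer and a set of keys, the internals of the script can be rewritten freely without breaking anything downstream.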

Next up is the two-way link. The reasoning here is that once a model has been exported to either analysis software or to BIM, if changes above a certain threshold are made, it would be a significant time-saver to be able to propagate these changes back through to the other model types, instead of doing the update work in series in multiple models. Of course, this requires a very high level of geometry interoperability, and it must take advantage of the fact that analysis models are often accessible as text files, which can be reimported after changes are made. More on that to come.
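As a rough illustration of the mechanics involved (and only that - the real format depends entirely on the analysis software), here is a sketch that reads a hypothetical whitespace-delimited node file back into Rhino, matching analysis nodes to Rhino points via a user-text key. The file format, path and key name are all assumptions for the sake of the example.

```python
import rhinoscriptsyntax as rs

NODE_FILE = r"C:\temp\analysis_nodes.txt"  # hypothetical path, one "node_id x y z" per line

# index the analysis nodes by id
nodes = {}
with open(NODE_FILE) as f:
    for line in f:
        parts = line.split()
        if len(parts) == 4:
            node_id = parts[0]
            nodes[node_id] = (float(parts[1]), float(parts[2]), float(parts[3]))

# move the matching Rhino points to their updated analysis coordinates
for obj_id in rs.ObjectsByType(1) or []:  # 1 = point objects
    node_id = rs.GetUserText(obj_id, "NodeId")  # illustrative key linking point to node
    if node_id in nodes:
        target = nodes[node_id]
        current = rs.PointCoordinates(obj_id)
        rs.MoveObject(obj_id, [target[0] - current.X,
                               target[1] - current.Y,
                               target[2] - current.Z])
```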