Motif V1: our first thoughts

At the end of March BIM 2.0 start-up Motif, which recently came out of stealth, launched its first product, and it’s perhaps not what you expected it to be, writes Martyn Day

With its stated aim of developing a next-generation BIM tool to rival Revit, Motif was always bound to start with a small subset of what will be the finished product. We have explained this many times before in AEC Magazine, but it’s worth saying again – the development of a Revit competitor is a marathon, and all the firms that are out of stealth and involved in this endeavour (Qonic, Snaptrude, Arcol and Motif) will be offering products with limited capabilities before we get to detailed authoring of models.

Motif V1 is a cloud-based tool which aims to address a range of pain points in architecture, engineering and construction workflows, particularly in the design presentation and review phases. From what we have seen of this initial offering, it’s clear that Motif has identified several features which you would typically find across a number of established applications – Miro, Revizto, Bluebeam, Speckle, Omniverse and many CDEs (Common Data Environments). This means that there’s no obvious single application that Motif really replaces, as it has a broad remit. Talking to CEO Amar Hanspal (read our interview below), the closest application the company sees itself naturally replacing is Miro, which became popular during Covid for collaborative working. As Motif is browser-based, it works on desktop, laptop or tablet.



Ideation assembly

The initial focus of the release is to enhance design review workflows by offering a more connected and 3D-enabled alternative to Miro. Users can collate 2D drawings, PDFs, SVGs and 3D models from a variety of different sources, to bring them into the Motif space for the creation of presentations, markup and collaboration.

The primary sweet spot is for collating project images and drawings into concept presentations, using an ‘infinite canvas’ which can be shared with team members or clients in real time. Models can be imported from multiple sources and views snapshotted; drawings from Revit, material swatches for mood boards, images of analysis results – pretty much anything – can be added. These can be arranged collaboratively and simultaneously by multiple users, and the software neatly assists with grid layout. There’s also the ability to add comments for team members to see and react to.

Motif recognises that a data centric approach is essential in next generation tools. With this aim in mind, Motif borrows some ideas from Speckle, offering plugins for a variety of commonly-used design tools, such as Rhino and Revit. These plugins offer granular, bi-directional links to the cloud-based, collaborative Motif environment. One of the special capabilities is the live broadcasting of objects from Revit as they are placed, with Motif displaying the streamed model.


It’s possible to run Revit side by side with Motif, with Motif automatically synchronising views. As geometry is added to Revit it appears almost instantly in the Motif view. This is food for thought, as it makes live Revit design information available to collaborative teams. While this is Speckle-like there’s no need to set up a server or have high technical knowledge.

Motif facilitates granular sharing of information through “frames,” allowing users to select and share specific subsets of data with different stakeholders. The software translates data from native object models (e.g. Revit) into a ‘neutral internal object model’ (mesh and properties) which allows it to connect with different systems.
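
To make the idea concrete, here is a minimal sketch of what a neutral object record and a “frame” selecting a subset of objects might look like. This is purely our own illustration in TypeScript – the names and fields are assumptions, not Motif’s actual schema or API.

```typescript
// Hypothetical sketch of a neutral object model and a "frame" - illustrative only,
// not Motif's actual schema. All names and fields are assumptions.

interface NeutralObject {
  id: string;                                             // stable ID mapped from the source application
  source: "revit" | "rhino" | "other";                    // authoring tool the object was streamed from
  mesh: { vertices: number[]; triangles: number[] };      // tessellated display geometry
  properties: Record<string, string | number | boolean>;  // flattened parameters from the native model
}

interface Frame {
  name: string;          // e.g. "Level 2 structural package for review"
  objectIds: string[];   // the subset of objects shared with a stakeholder
  sharedWith: string[];  // team members or clients with access
}

// Collect every object on a given level into a frame for review.
function frameByLevel(objects: NeutralObject[], level: string, sharedWith: string[]): Frame {
  const objectIds = objects
    .filter((o) => o.properties["Level"] === level)
    .map((o) => o.id);
  return { name: `Level ${level} review`, objectIds, sharedWith };
}
```
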
Buildings can be manipulated in 3D and there’s smart work plane generation. This might not be super useful right now, but we can imagine how it will play out once the BIM modelling tools get added in. For now, images can be applied to surfaces and freehand 3D markup and surface-based detection give the software an uncanny intuition for selecting surface planes and geometry when the mouse is near.



It’s possible to make markups to these ingested objects in Motif, and somewhat amazingly these comments can also be seen back in the Revit session. For now, though, there’s no clash detection or model entity editing available in Motif – its initial use is design review. Motif stores all the history at an object level, allowing users to go back in time to previous states of a project and see who changed what.

The product’s interface is wonderfully uncomplicated with only nine tools. The display feels very architectural, presenting ‘model in white’ with some grey shadowing.








The data model

The underlying data model is important. Motif uses a ‘linked information model’, based on the idea that in AEC all data is distributed data. Instead of trying to centralise all the project information in a single system, which is what Autodesk Docs / Autodesk Construction Cloud (ACC) does, Motif aims to link data where it resides and assumes that no single system will have all the necessary information for a building. So instead of ingesting and holding all the data as one version of the truth, somewhat trapping users in a file format or cloud system, Motif will pull in data for display and reference purposes. In future, we would guess, this will be mixed with Motif’s own design information.

Motif is intended to be ‘pretty open’, according to the team, with plans to expose the API and SDK to allow users and developers to extract and add their own data and object types.
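
As a thought experiment, a record in a ‘linked information model’ might look something like the sketch below: a pointer to data that stays in its source system, resolved on demand rather than copied into a central store. The field names and the fetch call are our own illustration, not Motif’s published API.

```typescript
// Illustrative sketch of a "linked information model" entry - field names are
// assumptions, not Motif's published API.

interface LinkedItem {
  uri: string;           // where the data actually lives, e.g. a cloud model or CDE document
  system: string;        // "revit" | "rhino" | "acc" | ...
  version: string;       // version or timestamp captured when the link was made
  cachedMeshId?: string; // optional reference to display geometry pulled in for viewing
}

// Resolving a link fetches the current state from the source system on demand,
// rather than holding a master copy inside the platform.
async function resolve(item: LinkedItem): Promise<unknown> {
  const response = await fetch(item.uri, { headers: { Accept: "application/json" } });
  if (!response.ok) throw new Error(`Could not reach source system: ${response.status}`);
  return response.json();
}
```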

At the moment the teams are developing plugins to connect Motif with various commonly-used BIM and CAD applications, including Grasshopper, Dynamo, SketchUp and AutoCAD, in addition to Rhino and Revit which are already supported.




Business model

At the early stage of most startups, having a sales force and actively selling an early version of an application is usually a low priority. Instead, many startups just seek early adopters for trial and feedback. Motif, while in development for almost two years, already has a small sales team and is actively selling the software for $25 a month per user. Hanspal says this is to ensure good discipline in software development, to provide scalability, performance, and responsiveness to customer feedback. The initial adoption is expected to come from companies looking to replace parts of their Miro workflow.

Conclusion

Motif fully intends to take on Autodesk Revit in the long term. CEO Hanspal realises this is a multi-year marathon, so while the team develops a modelling capability, it is utilising elements of its current technology to provide collaborative cloud-based solutions for a variety of pain points which they have identified as being under-serviced.

For now, the company aims to develop a cloud-based 3D interface for project information which will not necessarily replace existing BIM or drawing systems but will act as an aggregator and collaboration platform for those using a wide array of commonly used authoring tools. The software comes to market with an interesting array of capabilities, which may seem basic but provides some insight into what’s coming next – the bi-directional streaming between authoring tool and Motif, the deep understanding of Revit data, models and drawings, Revit synchronisation, connectivity to Rhino and smart interaction with model data all impress.

There may be some frustration with obvious capabilities that are currently omitted, such as simple clash detection between imported model geometry, but we are sure this is coming as development progresses.



What Motif does, it does well. It’s hard to pigeonhole the functionality delivered when compared to any other specific genre of application currently on the market. Many will find it’s well worth having for the creative storyboarding alone, others may find collaborative design review the key capability. Those that can’t afford Omniverse might love the ability to have an application that can display all the coordinated geometry from multiple applications in the cloud for project teams to see and understand.

It’s important to remember that this is a work in progress and, as the software develops its capabilities, it will expand into modelling and creating drawings. Its tight integration with Revit will be useful and reassuring to those who want to mix and match BIM applications as the industry inevitably transitions to BIM 2.0.

Meanwhile, the Motif team continues to grow, adding in serious industry firepower. After hiring Jens Majdal Kaarsholm, the former director of design technology at BIG last year, the company has added Greg Demchak, who formerly ran the Digital Innovation Lab at Bentley Systems, as well as Tatjana Dzambazova formerly of IDEO. Demchak was an early recruit at Revit before Autodesk acquired it and Dzambazova was a long time Autodesk executive, deeply involved in strategy and development of AEC, reality capture and AI. It seems the old gang is getting back together.


Interview with Amar Hanspal, CEO, Motif

Martyn Day: For this first product, what was the rationale in bringing out this subset of features? They seem quite disparate.

Amar Hanspal: What we are trying to do, over multiple years, is build out a system that you would call BIM, to provide everything you need to describe a building and create all the documents that are necessary to describe the building. There are four key elements, plus one: modelling, documentation, data and collaboration. And then the plus one is scripting.

The data part is all about how it’s managed, stored, linked, represented and displayed for a customer, which is the user interaction model, around all of this. Scripting is just automation across all of these four things. And we have always thought about BIM that way.

We know people will react to the initial product because they see the user interface and think we are doing markup and sketching. But behind the scenes, these are just the two things that got ‘productised’ first, data handling and collaboration, while we build towards the other capabilities.

Our philosophy around data is, no matter how we store it, fundamentally, no system is going to have all of the data necessary for a building. So instead of trying to centralise the information, like ACC does – and while you will always have some data in your system – I think the model we’re trying to bring to bear is a ‘link information model’, like the idea that you’re watching us bring with the plugins and the round-tripping of the comments. We’re going to assume that data is going to stay where it is, and like the internet, we have to figure out a linking model, a sharing model, to bring it together.

You can look at the app where it currently is, which features a couple of core concepts that we’re trying to bring to market – this distributed data idea, and then the second one is the user model on top of it, enabling sharing.


Martyn Day: You have been talking with leading AEC firms for two years. How will you go from this initial functionality to full BIM?

Amar Hanspal: We can’t wait ten years, like Onshape or Fusion did, to get all the capabilities in there. So what’s the sequencing of this? From sitting down and talking to customers about the design review process they were implementing, the product we ran across the most was Miro. For design review many are using a Miro board. They would express frustration that it was just a painful, static, flat process. That’s where our ‘light bulbs’ went off. Miro is just collages and a bunch of information. Even when we become a full BIM editor, we’re still going to have to coexist with Tekla, Rhino, some MEP application. We actually have to get good at being part of this ecosystem and not demanding to be the source of truth for everything.

It gets us to the goal that we’re looking for, and we’re solving a user problem. So that’s how we came up with what we were going to do first, a Miro workflow mirror – and some companies are doing design review using Adobe InDesign. Over time, we can become more capable of replacing some of the things that Bluebeam and Revizto do.


Martyn Day: With the initial release you have started selling the product, whereas many start-ups put off developing sales to focus on early adoption?

Amar Hanspal: It’s good discipline. It’s like eating your vegetables. When you ask people for money, you have to prove value. It’s good discipline for us to deliver something that’s useful to customers, and see them actually go through the process of making a decision to spend money on it because they see how much it’s going to help or save them. That’s really, obviously, Martyn, why we’re doing it. Just good discipline. Fundamentally, we want to make sure that we’re professional people developing software in a professional way; it forces us to be good about handling things like scalability and performance.


Read our extended interview with Motif CEO, Amar Hanspal


Autodesk Tandem in 2025

Autodesk Tandem, the cloud-based digital twin platform, is evolving at an impressive pace. Unusually, much of its development is happening out in the open, with regular monthly or quarterly feature preview updates and open Q&A sessions. Martyn Day takes a closer look at what’s new

Project Tandem, as it used to be known, was initiated in February 2020, previewed at Autodesk University 2020, and released for public beta in 2021. Four years on, there are still significant layers of technology being added to the product, now focussing on higher levels of functionality beyond dashboards and connecting to IoT sensors – adding systems knowledge, support for timeline events and upgrades to fundamentals such as visualisation quality.

Tandem development seems to have followed a unique path, maintaining its incubator-like status, with Autodesk placing a significant bet on the future size of an embryonic market.



For those following the development of Tandem, the one thing that comes across crystal clear is that creating a digital twin of even a single building — model generation, tagging and sorting assets, assigning subsystems, connecting to IoT, and building dashboards — is a huge task that requires ongoing maintenance of that data.

It’s not really ‘just an output of BIM’ which many might feel is a natural follow on. It has the capability to go way beyond the scope of what is normally called Facilities Management (FM), which has mainly been carried out with 2D drawings.

Realising the quantitative benefit of building a digital twin requires dedication, investment and the adoption of twins as a core business strategy. For large facilities, like airports, universities, hospitals – anything with significant operating expenses – this should be a ‘no brainer’, but as with any investment the owner/operator has to pay upfront to build the twin, to realise the benefits in the long tail, measured in years and decades. This, to me, makes the digital twins market not a volume product play.




Tandem evolution

My first observation is that the visual quality of Tandem has really gone up a notch, or three. Tandem is partially developed using Autodesk’s Forge components (now called Autodesk Platform Services). The model viewer front end came from the Forge viewer, which to be honest was blocky and a bit crappy-looking, in a 1990s computer graphics kind of way. The updated display brings up the rendering quality and everything looks sharper. The models look great and the colour feedback when displaying in-model data is fantastic. It’s amazing that this makes such a difference, but it brings the graphics into the 21st century. Tandem looks good.

As Tandem has added more layers of functionality the interface tool palettes have grown. The interface is still being refined, and Autodesk is now adopting the approach of offering different UIs to cater to different user personas, such as operators who might be more familiar with 2D floor plans than 3D.

Other features that have been added include the ability to use labels or floor plans to isolate elements in the display, auto views to simplify navigation, asset property cards (which can appear in view, as opposed to bringing up the large property panel) and thresholds, which can be set to fire off alerts when unexpected behaviour is identified. Users can now create groups of assets and allocate them to concepts such as ‘by room’. Spaces can now also be drawn directly in Tandem.

Speed is also improved. As Tandem is database centric, not file based, it enables dynamic loading of geometry and data, leading to fast performance even with complex models. It also facilitates the ability to retain all historical data and easily integrate new data sources as the product grows. This is the way all design-related software will run. Tandem benefits from being conceived in this modern cloud era.

That said, development of Tandem has moved beyond simply collecting, filtering, tagging and visualising data to providing actionable insights and recommendations. From talking with Bob Bray, vice president and general manager of Autodesk Tandem and Tim Kelly, head of Tandem product strategy, the next big step for Tandem is to analyse the rich data collected to identify issues and suggest optimisations. These proactive insights would include potential cost savings and carbon footprint reduction through intelligent HVAC management based on actual occupancy data.

Systems tracing

Having dumb geometry in dumb spaces was pretty much the full extent of traditional CAFM. Digital twins can and should be way smarter. The systems tracing capability in Tandem simplifies the understanding of all the complex building systems and their spatial relationships, aiding operations, maintenance, and troubleshooting. By clicking on building system elements, you can see the connections between different elements within a building’s systems, see how networks of branches and zones relate to the physical spaces they serve, and identify where critical components are located within the space. This means if something goes wrong, should that be discovered via IoT or reported by an occupant, systems tracing allows the issue to be pinpointed down to a specific level and room. Users can select a component like an air supply and then trace its connection down through subsystems to the spaces it serves.
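
Conceptually, systems tracing is a graph walk from a component down through the branches of a building system to the spaces it serves. The snippet below is our own minimal illustration of that idea, not Tandem’s actual data model.

```typescript
// Minimal sketch of tracing a building system as a graph - illustrative only,
// not Autodesk Tandem's implementation.

interface SystemNode {
  id: string;
  kind: "equipment" | "branch" | "terminal" | "space";
  children: string[]; // downstream connections
}

// Walk downstream from a component (e.g. an air handling unit) and collect
// every space it ultimately serves.
function spacesServedBy(startId: string, nodes: Map<string, SystemNode>): string[] {
  const spaces: string[] = [];
  const stack = [startId];
  const visited = new Set<string>();
  while (stack.length > 0) {
    const id = stack.pop()!;
    if (visited.has(id)) continue;
    visited.add(id);
    const node = nodes.get(id);
    if (!node) continue;
    if (node.kind === "space") spaces.push(node.id);
    stack.push(...node.children);
  }
  return spaces;
}
```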


Building in this connection between components to make a ‘system’ used to be a pretty manual process. Now, Tandem can automatically map the relationships between spaces and systems and use them for analysis to identify the root cause of problems.

Timelines

Data is valuable, and BMS (Building Management Systems) and IoT sensors generate the building equivalent of an ‘ECG’ every couple of seconds. The historical, as well as the live, data is incredibly valuable. Timelines in Tandem display this historic sensor data in a visual context. Kelly demonstrated an animated heatmap overlaid on the building model showing how temperature values fluctuate across a facility. It’s now possible to navigate back and forth through a defined period, either stepping through specific points or via animation, seeing changes to assets and spaces.
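
Under the hood, a timeline like this is essentially time-stamped sensor history replayed against the model. A hedged sketch of the basic operation – the names are ours, not Tandem’s:

```typescript
// Sketch of replaying historic sensor readings at a chosen point in time - illustrative only.

interface Reading {
  sensorId: string;
  timestamp: number; // epoch milliseconds
  value: number;     // e.g. temperature in degrees C
}

// Return the latest reading per sensor at or before a given time - the values a
// timeline slider would colour the heatmap with. Assumes readings are sorted by timestamp.
function stateAt(readings: Reading[], time: number): Map<string, number> {
  const state = new Map<string, number>();
  for (const r of readings) {
    if (r.timestamp <= time) state.set(r.sensorId, r.value);
  }
  return state;
}
```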

While the current implementation focuses on visualising historic data, Kelly mentioned the future possibility of the timeline being used to load or hide geometry based on changes over time, reflecting renovations or other physical alterations to the building.

Bray added that Tandem never deletes anything, implying that the historical data required for the timeline functionality is automatically retained within the system. This allows users to access and analyse past performance and conditions within the building at any point in the future, should that become a need.

Asset monitoring

Asset monitoring dashboards in Tandem are designed to provide users with a centralised view for monitoring the performance and status of their key assets. This feature, which is now in beta, aims to help operators identify issues and prioritise their actions. The dashboards will be customisable, and users can create dashboards to monitor the specific assets they care about. This allows for a tailored overview of the most critical equipment and systems within their facility.

The dashboards will likely allow users to establish KPIs and tolerance thresholds for their assets. By setting these parameters, the system can accurately measure asset performance and identify when an asset is operating outside of expected or acceptable ranges, with visual feedback on assets that fall outside optimal performance.
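
The threshold idea boils down to comparing each asset’s telemetry against an acceptable band and flagging anything that falls outside it. A minimal sketch, with made-up names and values:

```typescript
// Sketch of evaluating an asset reading against a tolerance band - names and values are illustrative.

interface Threshold {
  assetId: string;
  metric: string; // e.g. "supplyAirTemp"
  min: number;
  max: number;
}

function outOfTolerance(value: number, t: Threshold): boolean {
  return value < t.min || value > t.max;
}

// Example: flag an AHU whose supply air temperature drifts outside a 12-16 degC band.
const ahuBand: Threshold = { assetId: "AHU-02", metric: "supplyAirTemp", min: 12, max: 16 };
console.log(outOfTolerance(18.4, ahuBand)); // true -> surface an alert on the dashboard
```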

Assets that are consistently operating out of tolerance or experiencing recurring issues can be grouped to aid focus, e.g. by level, room or manufacturer. With this in mind, Tandem also has a ‘trend analysis’ capability, allowing users to identify potential future problems based on current performance patterns. The goal of these asset monitoring dashboards is to help drive preventative maintenance and planning for equipment replacement.

Tandem Connect

Digital twin creation and connectivity to live information mean there is a big integration story to tell, and it’s different on nearly every implementation. Tandem is a cloud-based conduit, pooling information from multiple sources which is then refined by each user to give them insight into layers of spatial and telemetric data. To do that, Autodesk needed to have integration tools to tap into, or export out to, the established systems, should that be CAFM, IoT, BMS, BIM, CAD, databases etc.

Tandem Connect is designed to simplify that process and comes with prepackaged integration solutions for a broad range of commonly used BMS, IoT and asset management tools. This is not to be confused with other developments such as Tandem APIs or SDKs.




The application was acquired and so has a different style of UI to other Autodesk products. Using a graphical front end, integrations can be initially plug and play, such as connecting to Microsoft Azure, through a graph interface. The core idea behind this is to ‘democratise the development of visual twins’ and not require a software engineer to get involved. However more esoteric connections may require some element of coding. Bray admitted there was significant ‘opportunity for consultancy’ that arises from the whole connectivity piece of the pie and that a few large system integrators were already talking with Autodesk about that opportunity.

Bray explained that Tandem Connect enables not only data inflow and outflow but also ‘workflow automation and data manipulation’. He gave an example where HVAC settings could be read into Tandem Connect, and a comfort index could be written, which was demonstrated at Autodesk University 2024.
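
That comfort index example could be as simple as the sketch below: read temperature and humidity for a space, derive a score, and write it back as a new data point. The weighting and field names here are our own invention for illustration, not the actual workflow shown at Autodesk University.

```typescript
// Illustrative comfort-index calculation - the formula and field names are assumptions,
// not the Tandem Connect workflow demonstrated at AU 2024.

interface HvacSample {
  roomId: string;
  temperatureC: number;
  relativeHumidity: number; // 0-100 %
}

// Score from 0 (uncomfortable) to 1 (ideal), penalising distance from 21 degC and 45% RH.
function comfortIndex(s: HvacSample): number {
  const tempPenalty = Math.min(Math.abs(s.temperatureC - 21) / 6, 1);
  const humidityPenalty = Math.min(Math.abs(s.relativeHumidity - 45) / 30, 1);
  return Math.round((1 - (tempPenalty * 0.6 + humidityPenalty * 0.4)) * 100) / 100;
}

console.log(comfortIndex({ roomId: "3.14", temperatureC: 24, relativeHumidity: 60 })); // 0.5
```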

Product roadmap

Autodesk keeps a product roadmap which, given the regular video updates, has been a pretty accurate guide to the direction of travel.

Two of the more interesting capabilities in development are portfolio optimisation and the development of more SDK options, plus the possibility of future integration of applications. Portfolio optimisation will allow users to view data of multiple facilities in one central location and should provide analytics to predict future events with suggested actions for streamlining operations.

Beyond the current REST API (available now), Autodesk is developing a full JavaScript Tandem SDK to build custom applications that leverage Tandem’s logic and visual interactivity. In the long term, Autodesk says it will possibly enable extensions for developers to include functionality within the Tandem application itself.

Conclusion

Tandem development continues relentlessly. The capabilities that are being added now are starting to get into the high value category. While refinements are always being added to the creation and filtering, once the data is in and tagged and intelligently put into systems, it’s then about deep integration, alerts for out of nominal operation at a granular level, historical analysis of systems, spaces and rooms, all with easy visual feedback and the potential for yet more data analysis and intelligence.

Bray uses a digital twin maturity model to outline the key stages of development needed to realise the full potential of digital twin technology. It starts with building a Descriptive Twin (as-built replica), then Informative Twin (granular operational data), then Predictive Twin (enabling predictive analytics), Comprehensive Twin (what-if simulation) and Autonomous Twin (self-tuning facilities).

At the moment, Tandem is crossing from Informative to Predictive, but the stated intent for higher level functionality is there. However, the warning is that your digital twin is only ever as good as the quality of the data you have put in.

Some of the early users of Tandem are now being highlighted by the company. In a recent webinar, Brendan Dillon, director of digital facilities & infrastructure, Denver International Airport gave a deep dive into how they integrated Maximo with Tandem to monitor facility operations.

Tandem is an Autodesk outlier. It’s not a volume product and it’s not something that Autodesk’s channel can easily sell. It’s an investment in product development that is quite unusual at the company. It doesn’t necessarily map to the way Autodesk currently operates as, from my perspective, it’s really a consultancy sale to a relatively small number of asset owners – unlike Bentley Systems, whose digital twin offerings often operate at national scale across sectors like road and rail. The good news is that Autodesk has a lot of customers, and they will be self-selecting potential Tandem customers, knowing they need to implement a digital twin strategy and probably having a good understanding of how arduous that journey may be. The Tandem team is trying to make that as easy as possible, and clearly developing it out in the open brings a level of interaction with customers that, these days, is to be commended.

Meanwhile, with its acquisition of niche products like Innovyze for hydraulic modelling, there are some indications that Autodesk is perhaps looking to cater to more involved engagements with big facility owners, and I see Tandem as falling into that category at the moment, while the broader twins market has still yet to be clearly identified.

Regarding digital twins

AEC Magazine caught up with Rob Charlton, CEO of Newcastle’s Space Group, to talk about digital twin adoption and advances. Twinview, created by the company’s BIM Technologies spin-off, is one of the most mature solutions on the market today and now has global customers

It’s tough being one of the first to enter a market but for Space, one of the country’s most BIM-centric architectural practices, it was a case of needs must. In 2016, its BIM consultancy spin-off, BIM Technologies, identified a need for its clients to be able to access their model data without the need for expensive software or hardware. Development started and this eventually became Twinview, launched in 2019.

Space Group is a practising architecture firm, a BIM software developer, a services supplier, and a BIM components / objects library creator and distributor. So, not only does it develop BIM software, it also uses the software in its own practice, as well as selling its solutions and services to other firms.



Selling twins

In previous conversations with CEO Rob Charlton on the market’s appetite for digital twins, he has been frank about the difficulty of getting buy-in from fellow architects, developers and even owner-operators. The customers who got into twins early were firms that owned portfolios of buildings which were sold as eco-grade investments.

Charlton acknowledges that he always expected it to be a long-term endeavour: “We started this development knowing it was a five-year-plus journey to any level of maturity or even awareness”. He draws a parallel to the adoption of BIM, recalling that even though Space bought its first license of Revit around 2001, it didn’t gain significant traction until around 2011, and even then, this was largely due to UK BIM mandates.

The early digital twin market development was a ‘slow burn’. Charlton contrasts BIM Technologies’ patient, self-funded approach with companies that seek large VC funding, arguing that “the market will move at the level it’s ready for”.

He explains that the good news is that over the last year, there has been an increase in awareness of the value of digital twins, particularly in the last six months.

This awareness is seen in the fact that clients are now putting out Requests for Proposals (RFPs) for digital twin solutions. For Charlton, this is a fundamental difference compared to the past, where they would have to approach firms to explain the benefits of digital twins. Now, the clients themselves have made the decision that they want a digital twin and are seeking proposals from providers.

Priorities and needs

There’s a lot of talk about digital twins but very little talk concerning the actual benefits of investing in building them. Charlton explains that a lot of twin clients are increasingly interested in reducing carbon in buildings, whether that be embodied or operational, as well as in compliance and safety. “It’s an area that Space is particularly passionate about but there is an inconsistency in how embodied carbon reviews and measurements are conducted,” he says.

Customer access to operational data is also important, explains Charlton: “Clients want to gain insights into how their buildings are actually performing in real time.”

He also notes that integration with facilities management is equally important, to streamline maintenance, manage issues, and improve overall building operations.

Clients value the ability to have “access to their information in one place,” adds Charlton. And here, the cloud is the perfect solution to deliver a unified platform which consolidates models and documents related to building assets.

Twinview clients are especially interested in owning their own data. Charlton gives the example of a New Zealand archive project, explaining that the client was particularly interested in having Twinview to maintain independence when using subcontractors or external service providers, which might come and go over the project lifetime.

Back in the UK, Twinview is being used in conjunction with ‘desk sensors’ on an NHS project to optimise space and potentially avoid unnecessary capital expenditure. Charlton explains that the client was finding the digital twin useful for “analysis on how the space is used” because they were seeking to validate or challenge space needs assessments by consultants.

Increasingly, contractual obligations include performance data. For one of Space’s school clients, the DFA Woodman Academy, there’s a contractual obligation to provide energy performance data at one month, three months and 12 months. Digital twin technology facilitated the compliance goal within the performance-based contract. The IoT sensors also identified high levels of CO2 in the classrooms, prompting an investigation into the cause.

Twinview goes beyond the traditional digital twin model for operations and has been used to connect residents to live building information. On a residential project, tenants access the Twinview data on their mobile phones to see energy levels in the buildings, temperatures and CO2, all through their own app.

Artificial Intelligence

Everyone is talking about AI, and Twinview now features a ChatGPT-like front end. This enables plain language search within the digital twin, both at an asset level and with regard to performance data. Charlton explains that while the AI in Twinview has a ‘ChatGPT-like interface’, it is not directly ChatGPT, although it does connect to it. Twinview developed its own system, possibly due to the commercial costs associated with using ChatGPT for continuous queries. The AI in Twinview accesses all building information, including the model, operational data and tickets, which are stored in a single bucket on AWS.

Looking to the future, Charlton mentions that the next stage of AI development for Twinview will be focused on prediction and learning. This includes the ability to generate reports automatically (e.g. weekly reports on average CO2 levels), predict future energy usage, and suggest ways to improve building performance.

A key differentiator for AI in Twinview in the future will be its capacity to understand correlations between disparate datasets that are often siloed, such as occupancy data, fire analysis, and energy consumption. By applying a GPT-like technology over this connected data, the aim is to uncover new insights and solutions.

Development Journey

From a slow burn start, despite being a relatively small UK business and competing with big software firms with deep pockets, Charlton told us that Twinview had already won international clients and is currently being shortlisted for other significant international projects, including one on the west coast of America, against international competition.



Higharc AI 3D BIM model from 2D sketch

In the emerging world of BIM 2.0, there will be generic new BIM tools and expert systems, dedicated to certain building types. Higharc is a cloud-based design solution for US timber frame housing. The company recently demonstrated impressive new AI capabilities

While AI is in full hype cycle and not a day passes without some grandiose AI claim, there are some press releases that raise the wizened eyebrows at AEC Magazine HQ. North Carolina-based start-up Higharc has demonstrated a new AI capability which can automatically convert 2D hand sketches to 3D BIM models within its dedicated housing design system. This type of capability is something that several generic BIM developers are currently exploring in R&D.

Higharc AI, currently in beta, uses visual intelligence to auto-detect room boundaries and wall types by analysing architectural features sketched in plan. In a matter of minutes, the software then creates a correlated model comprising all the essential 3D elements that were identified in the drawing – doors, windows, and fixtures.



Everything is fully integrated with Higharc’s existing auto-drafting, estimating, and sales tools, so that construction documents, take-offs, and marketing collateral can be automatically generated once the design work is complete.

In one of the demonstrations we have seen, a 2D sketch of a second floor is imported and analysed, and the software then automatically generates all the sketched rooms and doors, with interior and exterior walls and windows. The AI-generated layout even means the roof design adapts accordingly. Higharc AI is now available via a beta program to select customers.



Marc Minor, CEO and co-founder of Higharc explains the driving force behind Higharc AI. “Every year, designers across the US waste weeks or months in decades-old CAD software just to get to a usable 3D model for a home,” he says.

“Higharc AI changes that. For the first time, generative AI has been successfully applied to BIM, eliminating the gap between hand sketches and web-based 3D models. We’re working to radically accelerate the home design process so that better homes can be built more affordably.”

AI demo

In the short video provided by Higharc, we can see a hand-drawn sketch imported into the Autolayout tool. The sketch is a plan view of a second floor, with bedrooms, bathrooms and stairs with walls, doors and windows indicated. There are some rough area dimensions and handwritten notes, denoting room allocation type. The image is then analysed. The result is an opaque overlay, with each room (space) tagged appropriately, and a confirmation of how many rooms it found. There are settings for rectangle tolerance and minimum room areas. The next phase is to generate the rooms from this space plan.
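
For illustration, the output of that analysis step can be pictured as a list of detected spaces with tags and confidence values, plus the tolerance settings used. The structure below is our own guess at what such a result might look like, not Higharc’s actual data model.

```typescript
// Hypothetical shape of a sketch-analysis result - illustrative only, not Higharc's schema.

interface DetectedRoom {
  label: string;          // handwritten note read from the sketch, e.g. "BED 2"
  roomType: string;       // mapped data type, e.g. "Bedroom", which carries its own rules
  approxAreaSqFt: number; // rough area taken from the sketched dimensions
  confidence: number;     // 0-1, how sure the detector is about the boundary
}

interface AnalysisSettings {
  rectangleTolerance: number; // how far from a true rectangle a boundary may deviate
  minRoomAreaSqFt: number;    // detected boundaries smaller than this are ignored
}

// Low-confidence detections would be flagged for the user to confirm before rooms are generated.
function needsReview(rooms: DetectedRoom[], cutoff = 0.8): DetectedRoom[] {
  return rooms.filter((r) => r.confidence < cutoff);
}
```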

We now switch to Higharc’s real-time rendered modelling and drawing environment, where each room is inserted on the second floor of an existing single-storey residential BIM model; walls, windows, doors and stairs are added, and materials are applied, all while simultaneously referencing an image of the sketch. The accurate BIM model has been created, combining traditional modelling with AI sketch-to-BIM generation.

What is Higharc?

Founded in 2018, Higharc develops a tailored cloud-based BIM platform, specifically designed to automate and integrate the US housing market, streamlining the whole process of designing, selling and constructing new homes.

Higharc is a service sold to home builders that provides a tailored solution which integrates 3D parametric modelling, the auto-creation of drawings, 3D visualisations, material quantities and cost estimates, related construction documents and planning permit applications. AEC Magazine looked at the development back in 2022.

The company’s founders, some of whom were ex-Autodesk employees, recognised that there needed to be new cloud-based BIM tools and felt the US housing market offered a greenfield opportunity, as most of the developers and construction firms in this space had completely avoided the BIM revolution, and were still tied to CAD and 2D processes. With this new concept Higharc offered construction firms easy-to-learn design tools, which even prospective house buyers could use to design their dream homes. As the Higharc software models every plank and timber frame, accurate quantities can be connected to ERP systems for immediate and detailed pricing for every modification to the design.

The company claims its technology enhances efficiency, accelerating a builder’s time to market by two to three times, reducing the timeline for designing and launching new plots by 75% (approximately 90 days). Higharc also claims that plan designs and updates are carried out 100 times faster than with traditional 2D CAD software.

To date, Higharc has raised $80 million and has attracted significant investment and support from firms such as Home Depot Ventures, Standard Investments, and former Autodesk CEO Carl Bass. The company has managed to gain traction in the US market and is being used to build over 40,000 homes annually, representing $19 billion in new home sales volume.

While Higharc’s first go-to-market was established house building firms, the company has used money raised to expand its reach to address those who want to design and build their own homes. The investment by Home Depot would also indicate that the system will integrate with the popular local building merchants, so self-builders can get access to more generic material supply information. The company also plans to extend the building types it can design, eventually adding retail and office to its residential origins.

In conversation

After the launch, AEC Magazine caught up with co-founder Michael Bergin and company CEO Marc Minor to dig a little deeper into the origin of the AI tool and how it’s being used. We discovered that this is probably the most useful AI introduction in any BIM solution we have seen to date, as it actually solves a real-world problem – not just a nice-to-have or demoware.


In previous conversations with Higharc, it became apparent that the company had become successful, almost too successful, as onboarding new clients to the system was a bottleneck. Obviously, every house builder has different styles and capabilities which have to be captured and encoded in Higharc, but there was also the issue of digital skill sets. Typically, firms that were opting to use Higharc were not traditional BIM firms – they were housebuilders, more likely to use AutoCAD or a hand-drawn sketch than to have much understanding of BIM or modelling concepts. It turns out that the AI sketch tool originated out of a need to include the non-digital, but highly experienced, house building workforce.


Marc Minor: The sketch we used to illustrate at launch is a real one, from one of our customers. We have a client, a very large builder in Texas who builds 4,000 houses per year just in Texas. They have a team of 45 or so designers and drafters, and they have a process that’s very traditional. They start on drawing boards, just sketching. They spend three months or so in conceptual design and eventually they’ll pass on their sketches to another guy who works on the computer, where he models in SketchUp, so they can do virtual prototype walk-throughs to really understand the building, the design choices, and then make changes to it.

The challenge here is that it takes a long time to go back and forth. We showed them this new AI sketch-to-model work we were doing, and they gave us one of their sketches for one of their homes that they’re working on. The results blew their minds. They said for them ‘this is huge’. They told us they can cut weeks or months from their conceptual stage and probably bring in more folks at the prototype walk-through stage. It’s a whole new way of interacting with design.

What makes this so special, and is the only reason we were able to do it, is because of what Higharc is in the first place. It’s a data-first BIM system, built for the web from the ground up. Because it’s data first, it means that we can not only generate a whole lot of synthetic data for training rapidly, but we really have a great target for a system like this – taking a sketch and trying to create something meaningful out of the sketch. It’s essentially trying to transform the sketch into our data model. And when you do that, you get all the other features and benefits of the Higharc system right on top of it.


Martyn Day: As the software processes the file, it seems to go through several stages. Is the first form finding?

Marc Minor: It’s not just form finding, actually, it’s mapping the rooms to particular data types. And those types carry with them all kinds of rules and settings.

Michael Bergin: At the conceptual / sketch design phase these are approximate dimensions. Once you’ve converted the rooms into Higharc, the model is extremely flexible. You can stretch all the rooms, you can scale them, and everything will replace itself and update automatically. We also have a grid resolution setting, so the sketch could even be a bubble diagram, or very rough lines, and you just set the grid resolution to be quite high, and you can still get a model out of that.

Higharc contains procedural logic as to how windows are placed, how the foundation is placed, and the relationships between the rooms. So the interaction that you see as the AI processes the sketch and makes the model – placing the windows, doors and the spaces between the rooms – is all coming from rules that relate to the specifications for our builder.


Martyn Day: If doors collide, or designs do not comply with local codes, do you get alerted if you transgress some kind of design rule?

Michael Bergin: We have about 1,000 settings in Higharc that relate to the building and that are there to adjust for and align to issues of code compliance. When you get into automated rule checking, evaluating and digesting code rules and then applying that to the model, we have produced some exciting results in more of a research phase in that direction. There’s certainly lots of opportunities to express design logic and design rules, and we’ll continue to develop in that direction.

Marc Minor: One of the ways we use this is we go to a home builder we want as a customer. In advance of having a sales chat, we’ll actually go to their website and screenshot one of their floor plans. We’ll pull it into the AI tool and set it up as the house. We want to help folks understand that it’s not as painful and as hard as you might think. The whole BIM revolution happened in commercial; that’s kind of what’s happening in home building now. But 90% or more of all home builders use AutoCAD. We rarely come across Revit.


Martyn Day: I can see how you can bring non-digital housebuilders into the model creation side of things, where before everything would be handled by the computer expert. With this AI tool, does that mean suddenly everyone can contribute to the Higharc model?

Michael Bergin: Yes! That’s extremely important to us, bringing more of the business into the realm of the design, that’s really the core of our business. How do we bring the purchasing and the estimating user into the process of design? How do we take the operations user who’s scheduling all of the work to be done on the home into the design, because ultimately, they all have feedback. The sales people have feedback. The field team have feedback, but they’re all blocked out. They are always working through an intermediary, and perhaps through an email to a CAD operator. It goes into a backlog. Cutting that distance between all the stakeholders in the design process and the artefact of the design has driven a lot of our development.

It’s exciting to see them engaging in the process, to see new opportunities opening up for them, which I think is broadly a great positive aspect of what’s happening with the AI revolution.


Martyn Day: You have focused on converting raster images, which is hard, as opposed to vector. But could you work with vector drawings?

Michael Bergin: While it would have been easier to use a vector representation to do the same AI conversion work, the reason that we did focus on raster was that vector would have been quite limiting. It would have blocked us out from using conceptual representations. If our customers are using a digital tool at all, they are building sketches in something like Figjam. In this early conceptual design stage, we have not seen the Rayon tools or really any of the new class of tools that the market is opening up for. Our market of US home builders tends to stick with the way they’ve been doing things for some decades, and it works well for them, and we are fortunate that they have determined that Higharc is the right tool for their business.

Making it possible for the business process to change has required us to develop a lot of capabilities like integrating with the purchasing and estimation suite, integrating with the sales team, integrating with ERPs, really mirroring their business. Otherwise, I don’t think that we would have an excellent case for adoption of new tools in this industry.

Polycam for AEC

Reality capture devices are usually either high-cost laser scanners or affordable photogrammetry via drones or phones. Polycam, blending iPhone LIDAR with photogrammetry, is now aiming at the professional AEC market. Martyn Day reports

Precise reality capture has come a long way. We are in the process of moving from rare and expensive to cheap and ubiquitous. Laser scanning manufacturers are currently holding their price points and margins, but technology and mobility are closing in from the consumer end of the market. Matterport recently launched a low-cost laser scanner combined with photogrammetry, and Polycam, a developer of smartphone-based reality capture software for consumers, is looking to sell up to the professional market.

Polycam can be used to quickly document existing conditions (as-builts), measure spaces, and generate floor plans. The latest release looks to dig deeper into AEC workflows. The app is available for iOS and Android and makes use of the iPhone’s built-in LiDAR and cameras to capture interiors, and can capture exteriors when using footage from a drone. The software also supports Gaussian Splats to achieve high-resolution 3D capture. While the product has proved incredibly popular, the firm is looking to move into new areas of AEC, such as interior design, structural, construction inspection and facilities management.



The company

Polycam was founded four years ago by Chris Hinrich and Elliot Spellman. Their initial aim was to build software that could deliver the power of 3D capture to users of smartphones.

Before Polycam, the pair worked at a company which was developing a ‘3D Instagram’ that processed uploaded images on a server for photogrammetry. This was a bottleneck. The pair left the company and set up Polycam. The big innovation was the fact that you could process the 3D creation fast, on device.

With over half the Fortune 500 companies actively using Polycam and well over 100,000 paying users, the firm was able to raise over $22 million in investment in 2024, based on revenues of $6.5 million in 2023. One of the core areas showing regular growth was its AEC user base. The latest release focuses on providing tools for the growing base of AEC customers.



New features

Polycam supports Apple’s AR toolkit, allowing for easier and more accurate model creation by recognising walls, doors, and windows. I have used Polycam on my iPhone and compared it to a Leica Disto and have found the accuracy to be within a few millimetres when scanning a room. This makes it suitable for schematic designs and perhaps material ordering (though precise cuts might still require manual measurements). The platform supports multifloor scanning, to build a model very similar to that of Matterport.

While an automated scan-to-BIM workflow is seen as the aim, Polycam offers a service where users can order professional-grade 3D files that are then converted into CAD (AutoCAD) and BIM (Revit) files – but with a human-in-the-loop, through a collaboration with Transform Engine. This provides a higher quality and more detailed BIM output than automatic processing currently offers. AutoCAD layouts start at $95 and Revit models $200. Furthermore, Polycam has plans to add IFC (Industry Foundation Classes) file export, which will make it easier for users to create their own models.

That said, Polycam does instantly generate customisable 2D floor plans from its scans. These floor plans can be tweaked within the app for business and enterprise tiers, allowing for adjustments to wall thickness, colours, and labels.


There’s a new AI Property Report, which automatically generates PDFs and includes the floor plan along with information such as the number of bedrooms and bathrooms, floor area, total wall area, and a room-by-room breakdown with measurements. This could be used for insurance or costing and ordering materials. The AI automatically derives room classifications by detecting objects like beds (for bedrooms) and appliances (for kitchens).
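
The room-classification logic described above can be pictured as a simple mapping from detected objects to room types, which then rolls up into the report. A sketch, with assumed names and example values:

```typescript
// Sketch of deriving room classifications from detected objects - illustrative only.

const roomTypeByObject: Record<string, string> = {
  bed: "Bedroom",
  toilet: "Bathroom",
  oven: "Kitchen",
  sofa: "Living room",
};

interface ScannedRoom {
  id: string;
  areaSqM: number;
  detectedObjects: string[]; // e.g. ["bed", "wardrobe"]
}

function classify(room: ScannedRoom): string {
  for (const obj of room.detectedObjects) {
    const type = roomTypeByObject[obj];
    if (type) return type;
  }
  return "Unclassified"; // the user can override this in the report
}

const report = [
  { id: "R1", areaSqM: 14.2, detectedObjects: ["bed", "wardrobe"] },
  { id: "R2", areaSqM: 9.8, detectedObjects: ["oven", "sink"] },
].map((r) => ({ ...r, type: classify(r) }));
console.log(report); // room-by-room breakdown with areas and derived types
```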




The new Scene Editor allows multiple scans to be combined, including both interior captures and drone footage, into a single, unified 3D scene. This provides a holistic view of a property or project site, enabling users to navigate and analyse the entire space. Using layers, it’s possible to filter scenes and control the visibility of different parts of a capture.

The platform also has new collaboration and sync tools that allow users to add comments and start threaded conversations within a scanned space, facilitating review processes for architects and other stakeholders. The cross-platform nature of Polycam ensures that teams can access and share this data across various remote devices.

3D Generator

The latest version offers a quick way of making 3D components for a library from real-world objects like a chair, starting from an image or a prompt describing the details of the object you would like to create. This isn’t just the geometry, but the materials used too. These 3D objects can be placed in the real-world scans, enabling users to visualise and design spaces with custom virtual objects.

Limitations

Because everything is on device and there is no option for cloud or server-based processing, there is a natural limit. On-device memory is also a constraint. Polycam recommends a horizontal size limit of around 279 sq metres for a single scan, to ensure a decent result. Beyond this, the app might require compromises to process quickly without running out of memory. While the new scene editor addresses combining multiple scans, individual scans still have practical size limits.

Complex geometry can fool the application. I found that accurately capturing ceilings with multiple levels and stairs resulted in gaps in the models. While the technology has improved, complex or non-planar geometry in older buildings might still present some challenges.




While Polycam is accurate enough for schematic designs and potentially ordering bulk materials (the company claims within 2% compared to expensive LiDAR scanners), it might not be sufficient for tasks requiring very high precision, such as cutting kitchen cabinets, which may still necessitate manual measurements. Also, the AR Toolkit object recognition used for the spatial reports is not totally foolproof and may require users to manually override classifications if they are incorrect.

Polycam seems to have approached the market with a focus on construction, particularly in the American market. While this is predominantly 2D, the BIM side of the product still has a lot to deliver in connecting the data captured on device to BIM software. Scan-to-BIM still requires the cost and eye of a human to properly check the conversion. This has to be compared to having a professional survey and the legal indemnity that it provides. Would I use Polycam on a house? Hell yes! Would I use it on a major airport refurbishment? Only as a quick rough.

Conclusion

Polycam is certainly on the right path in concentrating development on instant 2D floor plan generation and measurement, as well as on building 3D models for AEC users. AR Toolkit’s intelligence always seems like magic when scanning a room. However, the software and service does have limitations, with some obvious omissions and a need for closer integration with AEC workflows. Surely we can’t be too far away from reliable scan-to-BIM results that don’t require a human in the loop?

Size matters. While the possibility of real-time streaming of large-scale scans is a compelling idea for future development, the current focus of Polycam appears to be on enhancing on-device processing and providing relevance to the AEC industry. The planned addition of features like IFC export and improved BIM workflows indicates a clear direction towards serving the professional needs of architects, engineers, and construction professionals.

Despite these limitations, the monthly cost is $17 per user (Pro) and $34 per user (Business). At those prices, it’s an application that many in the industry might well use regularly when on site, compared with the alternatives. It’s like having a budget Matterport scanner in your pocket.

The ongoing development and the specific features being introduced demonstrate a clear trajectory towards making Polycam a better fit for AEC professionals, especially surveyors and architects, particularly for initial site assessment, as-built documentation, schematic design, and collaboration.

Motif to take on Revit: exclusive interview https://aecmag.com/bim/motif-to-take-on-revit-exclusive-interview/ Fri, 07 Feb 2025 – BIM startup is led by former Autodesk co-CEO Amar Hanspal and backed by a whopping $46 million in funding

BIM startup Motif has just emerged from stealth, aiming to take on Revit and provide holistic solutions to the fractured AEC industry. Led by former Autodesk co-CEO Amar Hanspal and backed by a whopping $46 million in funding, Motif stands out in a crowded field. In an exclusive interview, Martyn Day explores its potential impact.

The race to challenge Autodesk Revit with next-generation BIM tools has intensified with the launch of Motif, a startup that has just emerged out of stealth. Motif joins other startups including Arcol, Qonic, and Snaptrude, who are already on steady development paths to tackle collaborative BIM. However, like any newcomer competing with a well-established incumbent, it will take years to achieve full feature parity. This is even the case for Autodesk’s next generation cloud-based AEC technology, Forma.

What all these new tools can do quickly is bring new ideas and capabilities into existing Revit (RVT) AEC workflows. This year, we’re beginning to see this happening across the developer community, a topic that will be discussed in great detail at our NXT BLD and NXT DEV conferences on 11 and 12 June 2025 at the Queen Elizabeth II Centre in London.

Though a late entrant to the market, Motif stands out. It’s led by Amar Hanspal and Brian Mathews, two former Autodesk executives who played pivotal roles in shaping Autodesk’s product development portfolio.

Hanspal was Autodesk CPO and, for a while, joint CEO. Mathews was Autodesk VP of platform engineering / Autodesk Labs and led the industry’s charge into adopting reality capture. They know where the bodies are buried, have decades of experience in software ideation and running large teams, and have immediate global networks with leading design IT directors. Their proven track record also makes it easier for them to raise capital and be taken as a serious contender from the get-go.


Further reading – Motif V1: our first thoughts

 



In late January, the company had its official launch alongside key VC investors. Motif secured $46 million in seed and Series A funding. The Series A round was led by CapitalG, Alphabet’s independent growth fund, while the seed round was led by Redpoint Ventures. Pre-seed venture firm Baukunst also participated in both rounds. This makes Motif the second-largest funded start-up in the ‘BIM’ space – the biggest being Higharc, a cloud-based expert system for US homebuilders, at $80 million.

Motif has been in stealth for almost two years, operating under the name AmBr (we are guessing, for Amar and Brian). Major global architecture firms have been involved in shaping the development of the software, even before any code was written, all under strict NDAs (non-disclosure agreements).

The firms working with Hanspal’s team deliver the most geometrically complex and large projects. The core idea is that by tackling the needs of signature architectural practices, the software should deliver more than enough capability for those who focus on more traditional, low risk designs.

There is considerable appetite to replace the existing industry standard software tools. This hunger has been expressed in multiple ‘Open Letters to Autodesk’, based on a wish for more capable BIM tools – a zeitgeist which Motif is looking to harness, as BIM eventually becomes a replacement market.

The challenge

Motif’s mission is to modernise the AEC software industry, which it sees as being dominated by ‘outdated 20th-century technology’. Motif aims to create a next-generation platform for building design, integrating 3D, cloud, and machine learning technologies. Challenges such as climate resilience, rapid urbanisation modelling, and working with globally distributed teams will be addressed, and the company’s solutions will integrate smart building technology.

Motif will fuse 3D, cloud, and AI with support for open data standards within a real-time collaborative platform, featuring deep automation. The unified database will be granular, enabling sharing at the element level. This, in many ways, follows the developments of other BIM start-ups such as Snaptrude and Arcol, which pitch themselves as the ‘Figma’ for BIM. In fact, Hanspal was an early investor in Arcol, alongside Procore’s Tooey Courtemanche.

At the moment, there is no software for the public to see, just some hints of the possible interface on the company’s website. Access is by request only. AEC Magazine is not privy to any product demonstrations, only what we have gleaned through conversations with Motif employees. The launch provided us with an exclusive interview with Hanspal to discuss the company, the technology and what the BIM industry needs.

A quantum of history

Before we dive into the interview, let’s have a quick look at how we got here. At Autodesk University 2016, while serving as Autodesk’s joint CEO, Hanspal introduced his bold vision for the future of BIM. Called Project Quantum, the aim was to create a new platform that would move BIM workflows to the cloud, providing a common data environment (CDE) for collaborative working.

Hanspal aimed to address problems which were endemic in the industry, arising from the federated nature of Architecture, Engineering, and Construction (AEC) processes and how software, up to that point, doubled down on this problem by storing data in unconnected silos.

Instead of focusing on rewriting or regenerating Revit as a desktop application, the vision was to create a cloud-based environment to enable different professionals to work on the same project data, but with different views and tools, all connected through the Quantum platform.



Quantum would feature connecting workspaces, breaking down the monolithic structure of typical AEC solutions. This would allow data and logic to be accessible anywhere on the network and available on demand, in the appropriate application for a given task. These workspaces were to be based on professional definitions, providing architects, structural engineers, MEP (Mechanical, Electrical, and Plumbing) professionals, fabricators, and contractors with access to the specific tools they need.

Hanspal recognised that interoperability was a big problem, and any new solution needed to facilitate interoperability between different software systems, acting as a broker, moving data between different data silos. One of the key aspects of Quantum was that the data would be granular, so instead of sharing entire models, Quantum could transport just the components required. This would mean users receive only the information pertinent to their task, without the “noise” of unnecessary data.

Eight months later, the Autodesk board elected fellow joint CEO Andrew Anagnost as Autodesk CEO, and Hanspal left the company. Meanwhile, the concept of Quantum lived on and development teams continued exploratory work under Jim Awe, Autodesk’s chief software architect.

Months turned into years and, by 2019, Project Quantum had been rebranded Project Plasma, as the underlying technology became part of a much broader, company-wide effort to build a cloud-based, data-centric approach to design data. Ultimately, Autodesk acquired Spacemaker in 2020 and assigned its team to develop the technology into Autodesk Forma, which launched in 2023 – more than six years after Hanspal first introduced the Quantum concept.

However, Forma still only addresses the conceptual stage, with Revit continuing to be the desktop BIM workflow, with all its underlying issues.

In many respects, Hanspal predicted the future of next generation BIM in his 2016 Autodesk University address. Up until that point, Autodesk had wrestled for years with cloud-based design tools, its first test being the mechanical CAD (MCAD) software Autodesk Fusion, which was demoed in 2009 and shipped in 2013. Cloud-based design applications were a tad ahead of the web standards and infrastructure which have since helped products like Figma make an impact.



In conversation

On leaving Autodesk in 2017, after his 15+ year stint, Hanspal thought long and hard about what to do next. In various conversations over the years, he admitted that the most obvious software demand was for a new, modern-coded BIM tool, as he had proposed in some detail with Quantum. However, Hanspal was mindful that it might be seen as sour grapes. Plus, developing a true Revit competitor came with a steep price tag – he estimated it would take over $200 million. Instead, Hanspal opted to start Bright Machines, a company which delivers scalable automation through robot modules and control software that uses computer vision and machine learning to manufacture small goods, such as electronics.

After almost four years at Bright Machines, in 2021, Hanspal exited and returned to the AEC problem, which, in the meantime, had not made any progress. During COVID, AEC Magazine was talking with some very early start-ups, and pretty much all had been in contact with Hanspal for advice and/or stewardship.


Martyn Day: Your approach to the market isn’t a single-platform approach, like Revit?

Amar Hanspal: In contrast to the monolithic approach of applications like Revit, we aim to target specific issues and workflows. There will be common elements. With the cloud, you build a common back end, but the idea is that you solve specific problems along the way. You only need one user management system, one payment system, collaboration etc. There are some technology layers that are common. But the idea is about solving end-user problems like design review, modelling, editing, QA, QC.

This isn’t a secret! I talked about this in the Quantum thing seven years ago! I always say ideas are not unique. Execution is. When it comes down to it, can anybody else do this? Of course they can. Will they do this? Of course not!


The current Motif website

Martyn Day: Data storage and flow is a core differentiator for BIM 2.0. Will your system use granular data, and how will you bypass the limitations of browser-based applications? You talk about ‘open’, which is very in vogue. Does that mean that your core database is Industry Foundation Classes (IFC), or is there a proprietary database?

Amar Hanspal: There are three things we have to figure out. One, how to run in a browser, where you have limited memory, so you can’t just send everything. You’ve got to get really clever about how to figure out what [data] people receive – and there are all sorts of modern ways of doing that.

Second is you have to be open from the get-go. However we store the data, anybody should be able to access it, from day one.

And then the third thing is, you can’t assume that you have all the data, so you have to be able to link to other sources and integrate where it makes sense. If it’s a Revit object, you should be able to handle it but if it’s not, you should be able to link to it.

You have to do some things for performance – it’s not proprietary, but you’re always doing something to speed up your user experience. The one path is: here’s your client, then you have to get data to them fast, and you have to do that in a very clever way, all while you’re encrypting and decrypting it. That’s just for user experience and performance, but from a customer perspective, any time you want to interrogate the data and request all the objects in the database, there is a very standard web API that you can use, and it’s always available.

Of course we’ll support IFC, just like we support RVT and all these formats. But that’s not connected, not our core data format. Our core data format is a lot looser, because we realised in this industry, it’s not just geometric objects you’re dealing with, you must deal with materials, and all sorts of data types. In some ways, you must try and make it more like the internet in a way. Brian [Mathews] would explain that the internet is this kind of weirdly structured yet linked data, all at the same time. And I think that’s what we are figuring out how to do well.
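
Motif has not published its API, so the sketch below is purely hypothetical: the endpoint, parameters and field names are invented to illustrate what the granular, element-level access over a standard web API that Hanspal describes could look like in practice, where a consultant pulls only the elements relevant to a task rather than downloading a whole model file.

```python
# Hypothetical sketch only: Motif has not published its API. The base URL,
# endpoint, query parameters and field names below are invented to illustrate
# element-level access over a standard web API.
import requests

BASE_URL = "https://api.example-bim-platform.com/v1"   # placeholder, not a real service
TOKEN = "replace-with-your-token"                      # auth token placeholder

def fetch_elements(project_id: str, category: str, limit: int = 100) -> list[dict]:
    """Request only the elements relevant to a task, rather than a whole model file."""
    response = requests.get(
        f"{BASE_URL}/projects/{project_id}/elements",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"category": category, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["elements"]

# e.g. pull just the doors for a QA check, leaving walls, furniture etc. on the server
doors = fetch_elements("proj-123", category="doors")
for door in doors:
    print(door.get("id"), door.get("properties", {}).get("fireRating"))
```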



Martyn Day: We have seen all sorts of applications now being developed for the web. Some are thick clients with a 20 GB download – basically a desktop application running in a web browser, utilising all the local compute, with the data on the cloud. Some are completely on the cloud with little resource requirement on the local machine. Autodesk did a lot of experimentation to try and work out the best balance. What are you doing?

Amar Hanspal: It’s a bit of a moving edge right now. I would say that you want to begin from first principles. You want to get the client as thin as possible, so that if you can, you avoid the big download at all costs. That can be through trickery; it’s also where WebGPU and all these new things that are showing up are helping. You can use browsers for more and more [things] every day, and that will help deliver applications. But I do think that there are situations in which the browser is going to get overwhelmed, in which case you’re going to require people to add something. Like, when the objects get really large and very graphical, sometimes you can deliver a better user experience if you give somebody a thicker client. I think that’s some way off for us to try and deal with, but our first principle is to leverage the browser as much as possible and not require users to download something to use our application. It may become, ‘you hit this wall for this particular capability’, then you’ll need to add something local.


Martyn Day: You have folks on your team that have worked on Revit. Will this help your RVT ability from the get-go?

Amar Hanspal: We’ve not reverse engineered the file format, but, you know, we do know how this works. We’re staying good citizens and will play nice. We’re not doing any hacks, we’re going to integrate very cleanly with whatever – Revit, Rhino, other things that people use – in a very clean way. We’re doing it in an intelligent way, to understand how these things are constructed.


Martyn Day: The big issue is that Revit is designed to predominantly model, in order to produce drawings. Many firms are fed up with documentation and modelling to produce low level of detail output. Are you looking to go beyond the BIM 1.0 paradigm?

Amar Hanspal: Yes, fabrication is very critical for modular construction. Fabrication is really one of the things that you have to ‘rethink’ in some way. It’s probably the most obvious other thing that you have to do. I also think that there are other experiences coming out – not that we are an AR/VR play – but you’re creating other sorts of experiences and deliverables that people want. We need to think through that more expansively.


Amar Hanspal sharing his vast experience in software development at AEC Magazine’s NXT DEV conference


Martyn Day: Are you using a solid modelling engine underneath, like Qonic?

Amar Hanspal: Yes, there is an answer to that, but what we’re coming out with first won’t need all that complexity – but yeah, of course, we will do all that stuff over time. There is a mixture of tech that we can use off the shelf – like licensing an engine, or using something that is relatively open source.


Martyn Day: For most firms that have entered this space, taking on Revit is the software equivalent of scaling the North face of the Eiger – 20 years of development, multidiscipline, broadly adopted. All of the new tools initially look like SketchUp, as there’s so much to develop. Some have focused on one area, like conceptual design; others have opted to develop all over the place to deliver broad but shallow functionality. Are you coming to market focussing on a sweet spot?

Amar Hanspal: One of the things we learned from speaking to customers is that [in] this whole concept modelling / Skema / TestFit world, there are so many things that developers are doing. We’re going after a different problem set. In some ways, the first thing that we’re doing will feel much more like a companion, collaboration product, and it will look like a creation thing. I don’t want to take anything out to market that feels half complete. The lesson we’ve learned from everything is that even to do the MVP (Minimum Viable Product) in modelling, we will be just one of sixteen things that people are using. I think, you know, I’d much rather go up to the North face and scale it.



Martyn Day: Many of the original letter writers were signature architects, complaining that they couldn’t model the geometry in Revit so used Rhino / Grasshopper then dropped the geometry into Revit. So, are you talking to the most demanding group of users to please?

Amar Hanspal:  I 100% agree with you. I think someone has to go up the North face of the Eiger. That’s my thing, it’s the hardest thing to do. It’s why we need this special team. It’s why we need this big capital. That’s why Brian and I decided to do it. I was thinking, who else is going to do it? Autodesk isn’t doing it! This Forma stuff isn’t really leading to the reinvention of Revit.

All these small developers that are showing up, are going to the East face. I give them credit. I’m not dissing them, but if they’re not going to scale the North face… I’m like, OK, this is hard, but we have got to go up the North face of the Eiger, and that’s what we’re going to do.

It’s like Onshape [cloud-based MCAD software] took ten years. Autodesk Fusion took ten years. And this might take us ten years to do it – I don’t think it will. So, what you will see from us – and maybe you might even criticise us for – is while we’re scaling, it’s going to look like little, tiny subsets coming out. But there’s no escaping the route we have to go.


Advertisement

Martyn Day: From talking with other developers, it looks like it will take five years to be feature-comparable. The problem is that products come to market before they are fleshed out; they get evaluated and dismissed because they look like SketchUp, not a Revit replacement, and it’s hard to get the market’s attention again after that.

Amar Hanspal:  Yeah, I think it’s five years. And that’s why, deliberately, the first product that’s going to come out is not going to be the editor. It’s going to look a little bit more Revizto-like because I think that’s what gives us time to go do the big thing. If you’re gonna come for the King, you better not miss. We’ve got to get to that threshold where somebody looks at it and goes, ‘It doesn’t do 100% but it does 50% or 60%’ or I can do these projects on it and that’s where we are – it’s why we’re working [with] these big guys to keep us honest. When they tell us they can really use this, then we open it up to everybody else. Up until then, we’ll do this other thing that is not a concept modeller but will feel useful.


Martyn Day: How many people are in the team now?

Amar Hanspal:  We’re getting 35 plus. I think we’re getting close to 40. It’s mostly engineering people. Up until two weeks ago, it was 32 engineers and myself. Now I have one sales guy, one marketing, so we’ll have a little bit of go to market. But it’s mainly all product people. We are a distributed company, based around Boston, New York or the Bay Area – that’s our core.

We’re constructing the team with three basic capabilities. There’s classic geometry, folks – and these are the usual suspects. The place where we have newer talent is on the cloud side, both on trying to do 3D on the browser front end, and then on the back-end side, when we’re talking about the data structures. None of those people come from CAD companies, none of them, they are all Twitter, Uber or robotics companies – different universes to traditional CAD.

The third skill set that we’re developing is machine learning. Again, none of those guys are coming from Cloud or 3D companies. These are research-focused, coming from first principles, that kind of focus.



Martyn Day: By trying to rethink BIM while being heavily influenced by what came before, like Revit, is there a danger of being constrained by past concepts? Someone described Revit to me as 70s thinking in 80s programming. Obviously, computer science, processors and the cloud have all moved on since. The same goes for business models. This weekend, I watched the CEO of Microsoft say SaaS was dead!

Amar Hanspal: We know we’re living in a post-subscription world. A post-‘named user’ world is the way I would describe it. The problem with subscription right now is that it’s all named user – you’ve got to be on board – and then with this token model at Autodesk, if you use the product for 30 seconds, you get charged for the whole day.

It’s still very tied to, sort of, a human being sitting in front of a chair. That’s what has to change. Now, what does that end up looking like? Of the prevalent models, there are three that are getting a lot of interest. One is the OpenAI ChatGPT model: you get a subscription, you get a bunch of tokens; you exceed them, you get more.

The other one, which I don’t think works in AEC, is outcome-based pricing, which works for call centres. You close a call, you create seven bucks for the software. I don’t see that happening. What’s the equivalent in AEC terms? Produce a drawing, seven bucks? What is the equivalent of that? That just seems wrong. I think we’re going to end up in this somewhat hybrid tokenised / ChatGPT-style model, but you know, we have to figure that out. We have to account for people’s ability to flex up and down. They have work that comes in and out. Yeah, that’s the weakness of the subscription business model – customers are just stuck.


Martyn Day: Why didn’t Autodesk redevelop Revit in the 2010 to 2015 timeframe?

Amar Hanspal: What I remember of those days – it’s been a while – is that I think there was a lot of focus on just trying to finish off Revit Structure and MEP. I think that was the ‘one Revit’ idea, and then suites and subscriptions. There was so much focus on business models at that point. But you’re right. I think, looking back, that was the time we should have redone Revit. I started it with Quantum, but I didn’t last long enough to be able to do it!


Conclusion

One could argue that Autodesk’s decision not to rewrite Revit, and to minimise its development, was a great move, profit-wise. For the last eight years, Revit sales haven’t slowed down and copies are still flying off the shelves. Revit is a mature product with millions of trained users, and RVT is the lingua franca of the AEC world, as defined in many contracts. It is proof of the argument that software is sticky, and that sticky grip gives Autodesk plenty of time to flesh out and build its Forma cloud strategy.

Autodesk has taken an active interest in the start-ups that have appeared, even letting Snaptrude exhibit at Autodesk University, while it assesses the threat and considers investing in or buying useful teams and tech. If there is one thing Autodesk has, it’s deep pockets, and throughout its history it has bought each subsequent replacement BIM technology – from Architectural Desktop (ADT) to Revit. Forma would have been the first in-house development, although I guess that’s partially come out of the Spacemaker acquisition.

But this isn’t the whole story. With Revit, it’s not just that the software is old, or that the files are big, or that the Autodesk team has given up on delivering major new productivity benefits. From talking with firms, there’s an almost allergic reaction to the business model, coupled with the threat of compliance audits, added to the perceived lack of product development. In my 35+ years of doing this, it’s still odd seeing Autodesk customers inviting in BIM start-ups to help the competitive products become match-fit, in order to provide real productivity benefits – and this has been happening for two years.

With Hanspal now throwing his hat officially in the ring, it feels like something has changed, without anything changing. The BIM 2.0 movement now has more gravitas, adding momentum to the idea that cloud-based collaborative workflows are now inevitable.  This is not to take anything away from Arcol, Snaptrude and Qonic which are possibly years ahead of Motif, having already delivered products to market, with much more to come.

From our conversation with Hanspal, we have an indication of what Motif will be developing without any real physical proof of concept. We know it has substantial backing from major VCs and this all adds to the general assessment that Revit and BIM is ripe for the taking.

At this moment in the AEC space, trying a full-frontal assault on the Revit installed base is like climbing the North face of the Eiger – you had better take a mighty big run-up and have plenty of reserves. And, for a long time, it’s going to look like you are going nowhere. Here, Motif is playing its cards close to its chest, unlike the other start-ups, which have been sharing in open development from very early on, dropping new capabilities weekly. While it is easy to assess the velocity with which Snaptrude, Arcol and Qonic deliver, I think it’s going to be hard to measure Motif’s modeller technology until it’s considerably further along in development. It’s a different approach. That doesn’t mean it’s wrong, and with regular workshops and collaboration with the signature architects, there should be some comfort for investors that progress is being made. But, as Hanspal explained, it’s going to be a slow drip of capability.

While Autodesk may have been inquisitive about the new BIM start-ups, I suspect the ex-Autodesk talent in Motif, carrying out a plan similar to Quantum, would be seen as a competitor that might do some damage if given space, time and resources. Motif is certainly well funded but, with a US-based dev team, it will have a high cash burn rate.

By the same measurement, Snaptrude is way ahead, with a larger, purely Indian development team, substantially lower costs and a lower capital burn rate. Arcol has backing from Tooey Courtemanche (aka Mr. Procore), and Qonic is doing fast things with big datasets that just look like magic, while being totally self-funded. BIM 2.0 already has quality and depth. The challenge is to offer enough benefit, at the right price, to make customers want to switch, once there is a minimum viable product.

It’s only February and we already know that this will be the year that BIM 2.0 gets real. All the key players and interested parties will be at our NXT BLD and NXT DEV conferences in London on 11-12 June 2025 – that’s Arcol, Autodesk, Bentley Systems, Dassault Systèmes, Graphisoft, Snaptrude, Qonic and others. As these products are being developed, we need as many AEC firms as possible on board to help guide their direction. We need to ensure the next generation of tools is what is needed, not what software programmers think we need, nor limited to concepts which constrained workflows in the past. Welcome, Motif, to the melee for the hearts and minds of next-generation users!

AI delivers 3D BIM model from 2D sketch https://aecmag.com/ai/higharc-ai-delivers-3d-bim-model-from-2d-sketch/ Thu, 13 Feb 2025 – Higharc, a cloud-based design solution for US timber frame housing, has just demonstrated impressive new AI capabilities

In the emerging world of BIM 2.0, there will be generic new BIM tools and expert systems, dedicated to certain building types. Higharc is a cloud-based design solution for US timber frame housing. The company just demonstrated impressive new AI capabilities.

While AI is in a full hype cycle and not a day passes without some grandiose AI claim, there are still some press releases that raise wizened eyebrows at AEC Magazine HQ.

North Carolina-based start-up, Higharc, has demonstrated a new AI capability which can automatically convert 2D hand sketches to 3D BIM models within its dedicated housing design system. This type of capability is something that several generic BIM developers are currently exploring in R&D.

Higharc AI, currently in beta, uses visual intelligence to auto-detect room boundaries and wall types by analysing architectural features sketched in plan. In a matter of minutes, the software then creates a correlated model comprising all the essential 3D elements that were identified in the drawing – doors, windows, and fixtures.

Everything is fully integrated with Higharc’s existing auto-drafting, estimating, and sales tools, so that construction documents, take-offs, and marketing collateral can be automatically generated once the design work is complete.

In one of the demonstrations we have seen, a 2D sketch of a second floor is imported and analysed, and the software then automatically generates all the sketched rooms and doors, with interior and exterior walls and windows. The AI-generated layout even means the roof design adapts accordingly. Higharc AI is now available via a beta program to select customers.

Marc Minor, CEO and co-founder of Higharc explains the driving force behind Higharc AI. “Every year, designers across the US waste weeks or months in decades-old CAD software just to get to a usable 3D model for a home,” he says.

“Higharc AI changes that. For the first time, generative AI has been successfully applied to BIM, eliminating the gap between hand sketches and web-based 3D models. We’re working to radically accelerate the home design process so that better homes can be built more affordably.”

AI demo

In the short video provided by Higharc, as seen below, we can see a hand-drawn sketch imported into the Autolayout tool. The sketch is a plan view of a second floor, with bedrooms, bathrooms and stairs, and with walls, doors and windows indicated. There are some rough area dimensions and handwritten notes denoting room type. The image is then analysed. The result is an opaque overlay, with each room (space) tagged appropriately, and a confirmation of how many rooms were found. There are settings for rectangle tolerance and minimum room area. The next phase is to generate the rooms from this space plan.

We then switch to Higharc’s real-time rendered modelling and drawing environment, where each room is inserted on the second floor of an existing single-storey residential BIM model, and walls, windows, doors and stairs are added and materials applied, all while referencing an image of the sketch. An accurate BIM model is created, combining traditional modelling with AI sketch-to-BIM generation.
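
Higharc has not described its algorithm, but the Autolayout settings mentioned above (rectangle tolerance and minimum room area) suggest a post-processing step along the following lines. This is a purely illustrative Python sketch; the thresholds and data shapes are invented, not Higharc’s code.

```python
# Illustrative sketch of the kind of post-processing the Autolayout settings imply:
# keep detected room polygons only if they are big enough and close enough to a
# rectangle. Thresholds and data shapes are invented; this is not Higharc's code.
from dataclasses import dataclass

@dataclass
class CandidateRoom:
    label: str                    # handwritten note, e.g. "bed 2"
    area_m2: float                # area of the detected polygon
    bounding_box_area_m2: float   # area of its axis-aligned bounding box

def keep_room(room: CandidateRoom, min_area_m2: float = 4.0, rect_tolerance: float = 0.8) -> bool:
    """A room passes if it exceeds the minimum area and mostly fills its bounding box."""
    rectangularity = room.area_m2 / room.bounding_box_area_m2
    return room.area_m2 >= min_area_m2 and rectangularity >= rect_tolerance

candidates = [
    CandidateRoom("bed 2", area_m2=12.5, bounding_box_area_m2=13.0),
    CandidateRoom("stray mark", area_m2=0.6, bounding_box_area_m2=2.0),
]
rooms = [c for c in candidates if keep_room(c)]
print([r.label for r in rooms])   # -> ['bed 2']
```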



What is Higharc?

Founded in 2018, Higharc develops a tailored cloud-based BIM platform, specifically designed to automate and integrate the US housing market, streamlining the whole process of designing, selling and constructing new homes.

Higharc is a service sold to home builders that provides a tailored solution integrating 3D parametric modelling, the automatic creation of drawings, 3D visualisations, material quantities and cost estimates, related construction documents and planning permit applications. AEC Magazine looked at the development back in 2022.

The company’s founders, some of whom were ex-Autodesk employees, recognised that there needed to be new cloud-based BIM tools and felt the US housing market offered a greenfield opportunity, as most of the developers and construction firms in this space had completely avoided the BIM revolution and were still tied to CAD and 2D processes. With this new concept, Higharc offered construction firms easy-to-learn design tools, which even prospective house buyers could use to design their dream homes. As the Higharc software models every plank and timber frame, accurate quantities can be connected to ERP systems for immediate and detailed pricing for every modification to the design.
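
Higharc has not detailed its ERP integration, but the principle described above – every design change can be re-priced immediately because the model is quantified down to individual components – can be shown with a toy Python sketch. The component names, quantities and unit prices are invented and bear no relation to Higharc’s data or to any specific ERP system’s API.

```python
# Toy illustration of the idea described above: because the model is quantified
# down to individual components, any design change can be re-priced immediately.
# Component names, quantities and unit prices are invented for this sketch.

UNIT_PRICES = {"stud_2x4": 3.10, "osb_sheet": 14.50, "window_unit": 240.00}  # per item, USD

def estimate(quantities: dict[str, int]) -> float:
    """Price a bill of quantities against the (assumed) ERP price list."""
    return sum(UNIT_PRICES[item] * qty for item, qty in quantities.items())

revision_a = {"stud_2x4": 820, "osb_sheet": 96, "window_unit": 14}
revision_b = {"stud_2x4": 838, "osb_sheet": 98, "window_unit": 15}   # client adds a window

delta = estimate(revision_b) - estimate(revision_a)
print(f"cost impact of the change: ${delta:,.2f}")   # -> cost impact of the change: $324.80
```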

The company claims its technology enhances efficiency, accelerating a builder’s time to market by two to three times, reducing the timeline for designing and launching new plots by 75% (approximately 90 days). Higharc also claims that plan designs and updates are carried out 100 times faster than with traditional 2D CAD software.

To date, Higharc has raised $80 million and has attracted significant investment and support from firms such as Home Depot Ventures, Standard Investments, and former Autodesk CEO Carl Bass. The company has managed to gain traction in the US market and is being used to build over 40,000 homes annually, representing $19 billion in new home sales volume.

While the company’s first go-to-market was established house-building firms, it has used the money raised to expand its reach to those who want to design and build their own homes. The investment by Home Depot would also indicate that the system will integrate with the popular local building merchant, so self-builders can get access to more generic material supply information. The company also plans to extend the building types it can design, eventually adding retail and office to its residential origins.

Hypar 2.0 – putting the spotlight on space planning https://aecmag.com/bim/hypar-2-0/ Wed, 12 Feb 2025 – Hypar co-founder Ian Keough gives us the inside track as his cloud-based design tool puts the spotlight on space planning

Towards the end of 2024, software developer Hypar released a whole new take on its cloud-based design tool, focused on space planning and with a cool new web interface. Martyn Day spoke with Hypar co-founder Ian Keough to get the inside track on this apparent pivot

Founded in 2018 by Anthony Hauck and Ian Keough, Hypar has certainly been on a journey in terms of its public-facing aims and capabilities.

Both co-founders are well-established figures in the software field. Hauck previously led Revit’s product development and pioneered Autodesk’s generative design initiatives. Keough, meanwhile, is widely recognised as the creator of Dynamo, a visual programming platform for Revit.

Initially, their creation Hypar looked very much like a single, large sandpit: for generative designers familiar with scripting, it enabled the creation of system-level design applications; for non-programmers, it offered a way to rapidly generate layouts, duct routing and design variations, get feedback on key metrics, and then export the results to Revit.


Find this article plus many more in the Jan / Feb 2025 Edition of AEC Magazine
👉 Subscribe FREE here 👈

Back in 2023, we were blown away with Hypar’s integration of ChatGPT at the front end. This aimed to give users the ability to rapidly generate conceptual buildings and then progress on to fabrication-level models. This capability was subsequently demonstrated in tandem with DPR Construction.

One year later and the company’s front end has changed yet again. With a whole new interface and a range of capabilities specifically focused on space planning and layout, it feels as if Hypar has made a big pivot. What was once the realm of scripters now looks very much like a cloud planning tool that could be used by anyone.

AEC Magazine’s Martyn Day caught up with the always insightful Ian Keough to discuss Hypar’s development and better understand what seems like a change in direction at the company, as well as to get his more general views on AEC development trends.


Martyn Day: Developers such as Arcol, Snaptrude and Qonic are all aiming firmly at Revit, albeit coming at the market from different directions and picking their own entry points in the workflow to add value, while supporting RVT. Since Revit is so broad, it seems clear that it will take years before any of these newer products are feature-comparable with Revit, and all these companies have different takes on how to get there. With that in mind, how do you define a next-generation design tool and what is Hypar’s strategy in this regard?

Ian Keough: At Hypar, we’ve been thinking about this problem for five or six years from a fundamentally different place. Our very first pitch deck for Hypar showed images from work done in the 1960s at MIT, when they were starting to imagine what computers would be used for in design. They weren’t imagining that computers would be used for drafting, of course. Ivan Sutherland had already done that years before and we have all seen those images.


What they were imagining is that computers would be used to design buildings, and they were making punch card programmes to lay out hospitals and stuff and that. To me, that’s a very pro-future kind of vision. It imagined that computing capacity would grow to a point where the computer would become a partner in the process of design, as opposed to a slightly better version of the drafting board.

However, when it eventually happened, AutoCAD was released in the 1980s and instead we took the other fork of history. The result of taking that other fork has been interesting. If you look at this from a historic perspective, computers did what they did and they got massively more powerful over the years. But the small layer on top of that was all of our CAD software, which used very little of that available computing power. In a real sense, it used the local CPU, but not the computing power of all the data centres around the world which have come online. We were not leveraging that compute power to help us design more efficiently, more quickly, more correctly. We were just complaining that we couldn’t visualise giant models, and that’s still a thing that people talk about.



That’s still a big problem for people’s workloads. I don’t want to dismiss it. If you’re building an airport, you have got to load it, federate all of these models and be able to visualise it. I get that problem. But the larger problem is that, i n order to get to that giant model that you’re complaining about, there are many, many years of labour, of people building in sticks-and-bricks models. How many airports have we designed in the history of human civilisation?

So, thinking about the fork we face – and I think we’re experiencing a ‘come to Jesus’ moment here – people are now seeing AI. As a result, they’re getting equal parts hopeful that it will suddenly, at a snap of the fingers, remove all the toil that they’re experiencing in building these bigger and bigger and more complicated models, and equal parts afraid that it will embody all the expertise that is in their heads, and will leave them out of a job!


Martyn Day: I can envisage a time where AI can design a building in detail, but I can’t see it happening in our lifetime. What are your thoughts?

Ian Keough: I don’t think that’s the goal. I don’t think that’s the goal of anybody out there – even the people who I think have the most interesting and compelling ideas around AI and architecture. But I do think there are a lot of people who have very uninteresting ideas around AI in architecture, and those involve things like using AI to generate renderings and stuff like that. It’s nifty to look at, but it’s so low value in terms of the larger story of what all this computing power could do for us.

At AEC Magazine, you’ve already written about experiments that we’ve conducted in terms of designing through our chat prompt/text-to-BIM capability. So, we took the summation of the five years of work that we have done on Hypar as a platform, the compute infrastructure and, when LLMs came along, Andrew Heumann on our team suggested it would be cool if we could see if we could map human natural language down into input parameters for our generative system.

We did that. We put it out there. And everybody got really, really excited. But we quickly realised the limitations of that system. It’s very, very hard to design anything real through a chat prompt. It’s one thing to generate an image of a building. It’s another thing to generate a building. You’ll see in the history of Hypar that the creation of this new version of the product directly follows the ‘text-to-BIM thing’, because what the ‘text-to-BIM thing’ showed us is that we have this very powerful platform.
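
Hypar has not published how its text-to-BIM mapping works beyond saying that natural language is mapped down to the input parameters of its generative system. The toy Python sketch below illustrates the mapping idea only, with a regex stand-in for the language model; the parameter schema is invented for illustration.

```python
# Toy illustration of the mapping idea Keough describes: turn a natural-language
# request into structured inputs for a generative routine. Hypar maps language to
# parameters via an LLM; the regexes and parameter schema here are invented.
import re
from dataclasses import dataclass

@dataclass
class MassingInputs:
    floors: int = 3
    floor_to_floor_m: float = 3.5
    footprint_area_m2: float = 1000.0

def parse_prompt(prompt: str) -> MassingInputs:
    """Very rough stand-in for the LLM step: pull a couple of parameters out of text."""
    inputs = MassingInputs()
    if m := re.search(r"(\d+)\s*(?:storey|story|floor)", prompt, re.I):
        inputs.floors = int(m.group(1))
    if m := re.search(r"([\d,.]+)\s*(?:sq\s*m|m2|square metres)", prompt, re.I):
        inputs.footprint_area_m2 = float(m.group(1).replace(",", ""))
    return inputs

print(parse_prompt("a 12 storey office block on a 2,400 sq m footprint"))
# MassingInputs(floors=12, floor_to_floor_m=3.5, footprint_area_m2=2400.0)
```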



The new Hypar 2.0, which was released in September 2024, and more specifically, the layout suggestions capability, was our first nod towards AI-infused capabilities. The platform is all about seeing if we can make a design tool that’s a design tool first and foremost.

The problem with AI-generated rendering is you get what you get, and you can’t really change it, except for changing that prompt, and you’re totally out of control. What designers want is control. They want to be able to move quickly and to be able to control the design and understand the input parameters design. Hypar 2.0 is really about that. It’s about how you create a design tool and then lift all of this compute and seamlessly integrate it with the design experience, so that computation is not some other experience on top of your model.


Martyn Day: Historically, we have been used to seeing Hypar perform rapid conceptual modelling through scripting, generate building systems, and work at multiple levels of detail – quickly modelling and then swapping out for greater fidelity. The whole Hypar experience, looking at the website now, seems to be about space planning. Would you agree?

Ian Keough: That’s the head-scratcher for a lot of people when it comes to this new version. People who have seen me present on the work we did with DPR and other firms to make these incredibly detailed and sophisticated building systems are saying, “Wait, you’re a space planning software now?”

That may seem like a little bit of a left turn. But the mission continues to enable anyone to build really richly detailed models from simple primitives without extra effort. We do this in the same way that we could take a low-resolution Revit wall and turn it into a fully clad DPR drywall layout, including all the fabrication instructions and the robotic layout instructions that go on the floor, and everything else. That capability still lives in Hypar, underneath the new interface.

What we are doing is getting back to software that solves real problems, again. This is a very gross simplification of what’s going on, but what problem does Revit actually solve? The answer is drawings, documentation. That’s the problem that Revit solves today and has solved since the beginning. What it does not solve is the problem of how to turn an Excel spreadsheet that represents a financial model into the plan for a hospital. It does not solve that at all. That is solved by human labour and human intellect. And right now, it’s solved in a very haphazard way, because the software doesn’t help you. It doesn’t offer you any affordances to help you do that. Everybody is largely either doing this as cockamamie-crazy, nested-family Lego blocks and jelly cubes in Revit, or trying to do it as just a bunch of coloured polygons in Bluebeam. That’s not how we’re utilising compute.

At the end of a design tool, it is still the architect’s experience and intellect that creates a building. What the design tool should do is remove all of the toil.

To give you an example of this, now that we’ve reached a point where users can use our software in a certain production context, to create these larger space plans, they’re starting to ask for the next layer of capabilities such as clearances as a semantic concept. This is the idea that, if I’m sitting at this desk, there should be a clearance in front of this desk, so that people have enough room to walk by. Sometimes, clearances are driven by code – so why has no piece of architectural design software in the last 20 years had a semantic notion of a clearance that you could either set specifically or derive from code? You might be able to write a checker in Solibri in the post-design phase, but what about the designer at the point of creating the model?
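
No mainstream BIM tool exposes clearances this way today, so the following is a minimal Python sketch of what a clearance as a first-class, semantic property might look like, checked at design time rather than in a post-design model checker. Geometry is simplified to axis-aligned rectangles and all names are illustrative.

```python
# A minimal sketch of a 'clearance' as a semantic property of placed equipment,
# checked while designing rather than in a separate post-design checker.
# Geometry is simplified to axis-aligned rectangles; all names are illustrative.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float   # width (x direction)
    d: float   # depth (y direction)

    def overlaps(self, other: "Rect") -> bool:
        return not (
            self.x + self.w <= other.x or other.x + other.w <= self.x or
            self.y + self.d <= other.y or other.y + other.d <= self.y
        )

@dataclass
class Desk:
    footprint: Rect
    front_clearance_m: float = 0.9  # could be set explicitly or derived from code

    def clearance_zone(self) -> Rect:
        # the strip directly in front of the desk that must stay unobstructed
        f = self.footprint
        return Rect(f.x, f.y + f.d, f.w, self.front_clearance_m)

def check_clearances(desks: list[Desk], obstructions: list[Rect]) -> list[str]:
    issues = []
    for i, desk in enumerate(desks):
        if any(desk.clearance_zone().overlaps(o) for o in obstructions):
            issues.append(f"desk {i}: clearance obstructed")
    return issues

desks = [Desk(Rect(0.0, 0.0, 1.6, 0.8))]
print(check_clearances(desks, obstructions=[Rect(0.5, 1.0, 0.6, 0.6)]))
# -> ['desk 0: clearance obstructed']
```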

Clearances are just one example. There are plenty of others, but the other impetus for a lot of what we’re doing right now is the fact that organisations like HOK have a vast storehouse of encoded design knowledge, in the form of all of the work that they’ve done in the past. Often, they cannot reuse this knowledge, except by way of hiring architects and transmitting this expertise from one person to the next, in a form that we have used for thousands of years – by storytelling, right?

What firms want is a way to capture that knowledge in the form of spaces, specific spaces, and all the stuff that’s in a space and the reasons for that stuff being there. And then they just want to transfer that knowledge from one project to another, whether it’s a healthcare project or any other kind of project that they’ve carried out before.

At the beginning of defining the next version of Hypar, when we started talking with architects about this problem, I was amazed by the cleverness of the architects. They’re actually finding solutions to do this with the software they have now. They build these giant, elaborate Revit models with hundreds of standard room types in them, and then they have people open those Revit models and copy and paste out stuff from the library.

I had one guy who referred to his model as ‘the Dewey Decimal System’. He had grids in Revit numbered in the Dewey Decimal System manner, such that he could insert new standards into this crazy grid system. And he referred to them by their grid locations.

In other words, architects have overcome the limitations that we’ve put in place in terms of software. But why isn’t it possible in Revit to select a room and save it as a standard, so that the next time I place a room tag that says exam room, such as a paediatric exam room, it just infills it with what I’ve done for the last ten projects?

To get back to your question about what the next generation looks like, I guess the simplest way to explain how we’re approaching it is that we’re picking a problem to solve that’s at the heart of designing buildings. It’s at the moment of creation, literally, of a building. We want to solve that problem and use software as a way to accelerate the designer, rather than a way to demonstrate that we can visualise larger models. That will come in time, but really, we want to use this vast computational resource that we have to undergird this sort of design, and make a great, snappy, fun design tool.


Martyn Day: Old BIM systems are one-way streets. They are about building a detailed model to produce drawings. But you have gone on record talking about tasks that need different levels of abstraction and multiple levels of scale, depending on the task. Can you explain how this functions in Hypar?

Ian Keough: You’ll notice in the new version of Hypar that there’s something called ‘bubble mode’. It’s a diagram mode for drawing spaces, but you’re drawing them in this kind of diagrammatic, ‘bubbly’ way.

That was an insight that we gleaned from spending literally hundreds of hours watching architects at the very early stage of designing buildings. They would use that way of communicating when they were doing departmental layout or whatever. They were hacking tools like Miro and other things, where they were having these conversations to do this stuff. But it was never at scale.

We were already thinking of this idea of being able to move them from low-level detail to a high level of detail without extra effort by means of leveraging compute. Now, in Hypar, and I’ll admit the bits are not totally connected yet in this idea, you’ll notice that people will start planning in this bubble mode, and then they’ll have conversations around bubble mode, at that level of detail.

Meanwhile, the software is already working behind the scenes, creating a network of rooms for them. And then they’ll perform the next step and use this clever stuff to intelligently lay out those rooms and the contents in the rooms. The next level of detail past that will be connections to other building systems, so let’s generate the building system. There’s this continuous thread that follows levels of detail from diagram to space – to spaces with equipment and furniture, and to building systems.


Martyn Day: We have seen Hypar focus on conceptual work, space planning, fabrication-level modelling. Is the goal here to try and tackle every design phase?

Ian Keough: We’re marching there. The great thing about this is that there’s already value in what we offer. This is something that I think start-ups need to think about. You’re solving a problem, and if you want to make any money at all, that problem needs to have value at every point along the trajectory. That’s unless you raise a ton of capital, and say, ‘Ten years from now, we’ll have something that does everything.’

The reality is at day five, after you’ve built some software, and you put it in customers’ hands, that thing has to have value for them. The good news is that just in the way that we design buildings now, from low-level detail to high-level detail, there’s value in all those places along the design journey.


The other thing that I think is going to happen, to achieve what we’ve been envisioning since the beginning of Hypar, is fully generated buildings. I do not believe in the idea that there’s this zero-sum game that we’re all playing, where somebody’s going to build the one thing that ‘owns the universe’.

This is a popular construct in people’s minds, because they love this notion of somebody coming along and slaying the dragon of Revit in some way, and replacing it with another dragon.

What’s going to happen is, in the same way that we see with massively connected systems of apps on your phone and on the internet, these things are going to talk to each other. It’s quite possible that the API of the future for generating electrical systems is going to be owned by a developer like Augmenta (www.augmenta.ai). And since we’re allowing people to layout space in a very agile way, Hypar plugs into that and asks the user, ‘Would you like this app to asynchronously generate a system for you?’

Now, it might be that, over Hypar’s lifetime, there will be real value in us building those things as well, because most of the work that we’re doing right now is really about the tactility of the experience. So it might be that, to achieve the experience that we want, we have to be the ones who own the generation of those systems as well, but I can’t say yet whether or not that’s the case.

Everything we’re doing right now in terms of the new application is around just building that design experience. What we do in the next six months to one year, vis-à-vis how we connect back into functions that are on the platform and start to expose that capability, I can’t speculate right now.

What we need to do is land this thing in the market and then get enough people interested in using it, so that it starts to take hold. Some of the challenge in doing that is what you alluded to earlier, which is that people are trying to pigeon-hole you. They’ll ask, ‘Are you trying to kill Revit?’, or, ‘Are you trying to kill this part of the process that I currently do in Revit?’ That’s a challenge for all start-ups.

The decision that we made to rebuild the UI is about the long-term vision we have for Hypar. That vision has always been to put the world’s building expertise in the hands of everyone, everywhere. And if you think about that long-term vision, everybody will have access to the world’s building expertise. But how do they access it? If it’s through an interface that only the Dynamo and Grasshopper script kids can use or want to use, then we will not have fulfilled our vision.

Artificial horizons: AI in AEC https://aecmag.com/ai/artificial-horizons-ai-in-aec/ Wed, 12 Feb 2025 – We ask Greg Schleusner, director of design technology at HOK for his thoughts on the AI opportunity

In AEC, AI rendering tools have already impressed, but AI model creation has not – so far. Martyn Day spoke with Greg Schleusner, director of design technology at HOK, to get his thoughts on the AI opportunity

One can’t help but be impressed by the current capabilities of many AI tools. Standout examples include Gemini from Google, ChatGPT from OpenAI, Musk’s Grok, Meta AI and now the new Chinese wunderkind, DeepSeek.

Many billions of dollars are being invested in hardware. Development teams around the globe are racing to create an artificial general intelligence, or AGI, to rival (and perhaps someday, surpass) human intelligence.

In the AEC sector, R&D teams within all of the major software vendors are hard at work on identifying uses for AI in this industry. And we’re seeing the emergence of start-ups claiming AI capabilities and hoping to beat the incumbents at their own game.

However, beyond the integration of ChatGPT frontends, or yet another AI renderer, we have yet to feel the promised power of AI in our everyday BIM tools.

The rendering race

The first and most notable application area for AI in the field of AEC has been rendering, with the likes of Midjourney, Stable Diffusion, Dall-E, Adobe Firefly and Sketch2Render all capturing the imaginations of architects.

While the price of admission has been low, challenges have included the need to find the right words to describe an image (there is, it seems, a whole art to writing prompting strategies) and then somehow remaining in control of the AI generation through subsequent iterations.


Greg Schleusner speaking at AEC Magazine’s NXT BLD conference

In this area, we’ve seen the use of LoRAs (Low-Rank Adaptations), lightweight fine-tunes that ‘adapt’ a base Stable Diffusion model to a trained concept or style, and ControlNet, which adds precise structural control and, in the right hands, delivers impressive results.

For those wishing to dig further, we recommend the amazing work of Ismail Seleit and his custom-trained LoRAs combined with ControlNet. For those who’d prefer not to dive so deep into the tech, SketchUp Diffusion, Veras and AI Visualizer (for Archicad, Allplan and Vectorworks) have helped make AI rendering more consistent and more likely to deliver repeatable results for the masses.
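
To make the LoRA-plus-ControlNet combination more concrete, here is a minimal sketch using the open source Hugging Face diffusers library. The model IDs, the LoRA file path, the prompt and the input sketch are placeholders, and the exact pre-processing (here, a Canny edge map that preserves the massing) will vary from workflow to workflow.

```python
# Minimal sketch: constrain Stable Diffusion with a Canny ControlNet and apply
# a custom-trained style LoRA. Model IDs, file paths and the prompt are
# placeholders; assumes a CUDA GPU and the diffusers / opencv-python packages.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# 1. Turn a massing sketch or viewport capture into a Canny edge map, which
#    ControlNet uses to preserve the underlying structure of the design.
sketch = np.array(Image.open("massing_sketch.png").convert("RGB"))
edges = cv2.Canny(cv2.cvtColor(sketch, cv2.COLOR_RGB2GRAY), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2. Load a base Stable Diffusion checkpoint plus the Canny ControlNet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# 3. Apply a custom-trained LoRA (e.g. a practice-specific facade style).
pipe.load_lora_weights("./loras/my_facade_style.safetensors")

# 4. Generate: the edge map keeps the massing, the LoRA keeps the style.
result = pipe(
    prompt="timber-clad community library, overcast daylight, photoreal",
    image=control_image,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
result.save("concept_render.png")
```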

However, when it comes to AI ideation, at some point architects will want to bring this into 3D – and there is no obvious way to do so. Interpreting a 2D image as a Rhino model or Grasshopper script requires real skill, as demonstrated by the work of Tim Fu at Studio Tim Fu.

It’s possible that AI could be used to auto-generate a 3D mesh from an AI conceptual image, but this remains a challenge, given the nature of AI image generation. Some tools are making progress by analysing the image to extract depth and spatial information, but the resulting mesh tends to come out as a single lump, or as a jumble of meshes, incoherent as a BIM model or for downstream use.
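
As a rough illustration of that ‘extract depth and spatial information’ step, the sketch below back-projects a monocular depth map (from any single-image depth estimator) into a point cloud using a pinhole camera model. The focal length is an assumed value, and the output is exactly the kind of unstructured blob of geometry described above, nothing like a BIM-ready model.

```python
# Minimal sketch: turn a per-pixel depth map from a single AI-generated image
# into an unstructured point cloud via pinhole back-projection.
# 'depth' would come from any monocular depth estimator; the focal length is
# an assumption, not something recovered from the image itself.
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, focal_px: float = 800.0) -> np.ndarray:
    """Back-project an (H, W) depth map into an (H*W, 3) array of XYZ points."""
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0                       # assume principal point at centre
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Fake depth map standing in for an estimator's output on a concept image
depth = np.random.uniform(2.0, 30.0, size=(480, 640))
points = depth_to_point_cloud(depth)
print(points.shape)  # (307200, 3): a cloud of points, not a coherent BIM model
```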


Back in 2022, we tried taking 2D photos and AI-generated renderings from Hassan Ragab into 3D using an application called Kaedim. But the results were pretty unusable, not least because at that time Kaedim had not been trained on architectural models and was more aimed at the games sector.

Of course, if you have multiple 2D images of a building, it is possible to recreate a model using photogrammetry and depth mapping.

AI in AEC – text to 3D

It’s possible that the idea of auto-generating models from 2D conceptual AI output will remain a dream. That said, there are now many applications coming online that aim to provide the AI generation of 3D models from text-based input.

The idea here is that you simply describe in words the 3D model you want to create – a chair, a vase, a car – and AI will do the rest. AI algorithms are currently being trained on vast datasets of 3D models, 2D images and material libraries.

While 3D geometry has mainly been expressed through meshes, there have been innovations in modelling geometry with the development of Neural Radiance Fields (NeRFs) and Gaussian splats, which represent colour and light at any point in space, enabling the creation of photorealistic 3D models with greater detail and accuracy.
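
At the risk of oversimplifying, the sketch below shows the compositing idea at the heart of Gaussian splatting: each splat contributes a colour and an opacity at a pixel, and contributions are blended front to back. Real renderers derive each alpha by projecting a 3D Gaussian into screen space; here the values are simply given for illustration.

```python
# Minimal sketch of the front-to-back compositing used in Gaussian splatting:
# C = sum_i c_i * a_i * prod_{j<i} (1 - a_j), with splats sorted by depth.
# Depths, colours and alphas below are illustrative values only.
import numpy as np

def composite_splats(depths, colours, alphas):
    """Blend splat contributions at one pixel, nearest splat first."""
    order = np.argsort(depths)                 # front (small depth) to back
    pixel = np.zeros(3)
    transmittance = 1.0                        # how much light still gets through
    for i in order:
        pixel += colours[i] * alphas[i] * transmittance
        transmittance *= (1.0 - alphas[i])
        if transmittance < 1e-4:               # early exit once nearly opaque
            break
    return pixel

depths  = np.array([2.0, 1.2, 3.5])
colours = np.array([[0.9, 0.4, 0.2], [0.2, 0.5, 0.9], [0.1, 0.8, 0.3]])
alphas  = np.array([0.6, 0.3, 0.8])
print(composite_splats(depths, colours, alphas))
```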

Today, we are seeing a growing number of firms bringing ‘text-to-3D’ solutions to market. Adobe Substance 3D Modeler has a plug-in for Photoshop that can perform text-to-3D, and Autodesk demonstrated similar technology — Project Bernini — at Autodesk University 2024.

However, the AI-generated output of these tools seems to be fairly basic — usually symmetrical objects and more aimed towards creating content for games.

In fact, the bias towards games content generation can be seen in many offerings. These include Tripo, Kaedim, Google DreamFusion and Luma AI Genie.

There are also open source and research alternatives, including Tencent’s Hunyuan3D-1 and Nvidia’s Magic3D and Edify.

AI in AEC – the Schleusner viewpoint

When AEC Magazine spoke to Greg Schleusner of HOK on the subject of text-to-3D, he highlighted D5 Render, which is now an incredibly popular rendering tool in many AEC firms.

The application comes with an array of AI tools for creating materials, texture maps and atmosphere matches from images. It supports AI upscaling and has incorporated Meshy’s text-to-3D generator for creating content in-scene.

That means architects could add in simple content, such as chairs, desks, sofas and so on — via simple text input during the arch viz process. The items can be placed in-scene on surfaces with intelligent precision and are easily edited. It’s content on demand, as long as you can describe that content well in text form.


Text-to-3D technology from Autodesk – Project Bernini

Schleusner said that, from his experimentation, text-to-image and image-to-video tools are getting better and will eventually be quite useful — but that can be scary for people working in architecture firms. As an example, he suggested that someone could show a rendering of a chair within a scene, generated from a text prompt. But it’s not a real chair, and it can’t be purchased, which might be problematic when it comes to work that will be shown to clients. So, while there is certainly potential in these types of generative tools, mixing fantasy with reality in this way doesn’t come problem-free.

It may be possible to mix the various model generation technologies. As Schleusner put it: “What I’d really like to be able to do is to scan or build a photogrammetric interior using a 360-degree camera for a client and then selectively replace and augment the proposed new interior with new content, perhaps AI-created.”

Gaussian splat technology is getting good enough for this, he continued, while SLAM laser scan data is never dense enough. “However, I can’t put a Gaussian splat model inside Revit. In fact, none of the common design tools support that emerging reality capture technology, beyond scanning. In truth, they barely support meshes well.”


AI in AEC – LLMs and AI agents

At the time of writing, DeepSeek has suddenly appeared like a meteor, seemingly out of nowhere, intent on ruining the business models behind ChatGPT, Gemini and other paid-for AI tools.

Schleusner was early into DeepSeek and has experimented with its script and code-writing capabilities, which he described as very impressive.

LLMs, like ChatGPT, can generate Python scripts to perform tasks in minutes, such as creating sample data, training machine learning models, and writing code to interact with 3D data.

Schleusner is finding that AI-generated code can accomplish these tasks relatively quickly and simply, without needing to write all the code from scratch himself.

“While the initial AI-generated code may not be perfect,” he explained, “the ability to further refine and customise the code is still valuable. DeepSeek is able to generate code that performs well, even on large or complex tasks.”
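
As an illustration of the kind of throwaway script an LLM can draft in minutes (our own example, not code from Schleusner’s experiments), the snippet below generates synthetic sample data and trains a quick model on it.

```python
# Illustrative only: the kind of small, disposable script an LLM can produce
# on request. It fabricates sample 'room' data and fits a quick regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic rooms: floor area (m2), storey and glazing ratio
n = 500
area = rng.uniform(10, 200, n)
storey = rng.integers(0, 20, n)
glazing = rng.uniform(0.1, 0.9, n)
# Fake target: peak occupancy loosely tied to area and glazing, plus noise
occupancy = area / 10 + 5 * glazing + rng.normal(0, 1.5, n)

X = np.column_stack([area, storey, glazing])
X_train, X_test, y_train, y_test = train_test_split(X, occupancy, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out rooms: {model.score(X_test, y_test):.2f}")
```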

With AI, much of the expectation among customers centres on the addition of these new capabilities to existing design products. For instance, in the case of Forma, Autodesk claims the product uses machine learning for real-time analysis of sunlight, daylight, wind and microclimate.

However, if you listen to AI-proactive firms such as Microsoft, executives talk a lot about ‘AI agents’ and ‘operators’, built to assist firms and perform intelligent tasks on their behalf.

Microsoft CEO Satya Nadella is quoted as saying, “Humans and swarms of AI agents will be the next frontier.” Another of his big statements is that, “AI will replace all software and will end software as a service.” If true, this promises to turn the entire software industry on its head.

Today’s software as a service, or SaaS, systems are proprietary databases/silos with hard-coded business logic. In an AI agent world, these boundaries would no longer exist. Instead, firms will run a multitude of agents, all performing business tasks and gathering data from any company database, files, email or website. In effect, if it’s connected, an AI agent can access it.

At the moment, to access certain formatted data, you have to open a specific application and maybe have deep knowledge to perform a range of tasks. An AI agent might transcend these limitations to get the information it needs to make decisions, taking action and achieving business-specific goals.

AI agents could analyse vast amounts of data, such as building designs, to predict structural integrity, immediately flag up if a BIM component causes a clash, and perhaps eventually generate architectural concepts. They might also be able to streamline project management by automating routine tasks and providing real-time insights for decision-making.

AI agents could analyse vast amounts of data, such as building designs, to predict structural integrity, immediately flag up if a BIM component causes a clash, and perhaps eventually generate architectural concepts

The main problem is going to be data privacy, as AI agents require access to sensitive information in order to function effectively. Additionally, the transparency of AI decision-making processes remains a critical issue, particularly in high-stakes AEC projects where safety, compliance and accuracy are paramount.

On the subject of AI agents, Schleusner said he has a very positive view of the potential for their application in architecture, especially in the automation of repetitive tasks. During our chat, he demonstrated how a simple AI agent might automate something as mundane as generating an expense report, extracting the relevant information, both handwritten and printed, from receipts.
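
A hypothetical sketch of the extraction step in such an agent is shown below. It assumes the receipt text has already been OCR’d and that an OpenAI-compatible API is available; the model name is a placeholder, and this is not the agent Schleusner demonstrated.

```python
# Hypothetical sketch of an extraction step in an expense-report 'agent':
# pass receipt text (e.g. from OCR of a scan or photo) to an LLM and ask for
# structured fields. Assumes the OpenAI Python SDK and an API key in the
# environment; the model name is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

def extract_receipt_fields(receipt_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Extract vendor, date, currency and total from the receipt. "
                        "Reply with JSON only."},
            {"role": "user", "content": receipt_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_receipt_fields("Caffe Nero  12/02/2025  2x flat white  GBP 7.90"))
```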

He has also experimented with creating an AI agent to perform clash detection on two datasets containing only the XYZ positions of object vertices. Without building a model, the agent was able to identify whether or not the objects were clashing. The files were never opened. This process could run constantly in the background as teams submit components to a BIM model. AI agents could be a game-changer when it comes to simplifying data manipulation and automating repetitive tasks.
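
The sketch below is a minimal reconstruction of that idea, not Schleusner’s actual agent: given nothing but arrays of XYZ vertex positions for two objects, it tests whether their axis-aligned bounding boxes overlap, the simplest possible clash check, with no model ever built or file opened in an authoring tool.

```python
# Minimal reconstruction of the idea: clash-check two objects given nothing
# but arrays of XYZ vertex positions. A real agent would refine this beyond
# axis-aligned bounding boxes, but the principle is the same.
import numpy as np

def aabb(vertices: np.ndarray):
    """Axis-aligned bounding box of an (N, 3) vertex array."""
    return vertices.min(axis=0), vertices.max(axis=0)

def clashes(verts_a: np.ndarray, verts_b: np.ndarray) -> bool:
    """True if the two objects' bounding boxes overlap on every axis."""
    min_a, max_a = aabb(verts_a)
    min_b, max_b = aabb(verts_b)
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

# Two toy 'components': a duct run and a beam, as raw vertex dumps
duct = np.array([[0, 0, 3.0], [4, 0, 3.0], [4, 1, 3.4], [0, 1, 3.4]])
beam = np.array([[2, -1, 3.2], [2, 2, 3.2], [2.3, 2, 3.6], [2.3, -1, 3.6]])
print("Clash detected:", clashes(duct, beam))   # True: their extents overlap
```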

Another area where Schleusner feels AI agents could be impactful is in the creation of customisable workflows, allowing practitioners to define the specific functions and data interactions their business needs, rather than being constrained by pre-built software interfaces and limited configuration options.

Most of today’s design and analysis tools have built-in limitations. Schleusner believes that AI agents could offer a more programmatic way to interact with data and automate key processes. As he explained, “There’s a big opportunity to orchestrate specialised agents which could work together, for example, with one agent generating building layouts and another checking for clashes. In our proprietary world with restrictive APIs, AI agents can have direct access and bypass the limits on getting at our data sources.”
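
The quote hints at an orchestration pattern. Below is a toy sketch of what that coordination loop could look like, with the ‘agents’ reduced to plain Python functions; a real system would wrap LLM calls, tool access and data connectors around each one.

```python
# Toy orchestration sketch: two 'specialised agents', one proposing a layout,
# the other checking it, coordinated by a simple loop. In a real system each
# agent would wrap an LLM plus tool and data access; here they are functions.
import random

def layout_agent(n_rooms: int) -> list[dict]:
    """Propose rectangular rooms along a corridor (dimensions in metres)."""
    return [{"name": f"room_{i}", "x": i * 5.0, "width": random.uniform(3.0, 6.0)}
            for i in range(n_rooms)]

def clash_agent(layout: list[dict]) -> list[str]:
    """Flag neighbouring rooms whose footprints overlap."""
    issues = []
    for a, b in zip(layout, layout[1:]):
        if a["x"] + a["width"] > b["x"]:
            issues.append(f"{a['name']} overlaps {b['name']}")
    return issues

def orchestrator(max_rounds: int = 20) -> list[dict]:
    """Keep requesting layouts until the checking agent finds no issues."""
    for round_no in range(1, max_rounds + 1):
        layout = layout_agent(n_rooms=4)
        issues = clash_agent(layout)
        print(f"round {round_no}: {len(issues)} issue(s)")
        if not issues:
            return layout
    raise RuntimeError("no clash-free layout found")

print(orchestrator())
```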


Stable Diffusion image courtesy of James Gray

Conclusion

For the foreseeable future, AEC professionals can rest assured that AI, in its current state, is not going to totally replace any key roles — but it will make firms more productive.

The potential for AI to automate design, modelling and documentation is currently overstated, but as the technology matures, it will become a solid assistant. And yes, at some point years hence, AI with hard-coded knowledge will be able to automate some new aspects of design, but I think many of us will be retired before that happens. However, there are benefits to be had now and firms should be experimenting with AI tools.

We are so used to the concept of programmes and applications that it’s kind of hard to digest the notion of AI agents and their impact. Those familiar with scripting are probably also constrained by the notion that the script runs in a single environment.

By contrast, AI agents work like ghosts, moving around connected business systems to gather, analyse, report, collaborate, prioritise, problem-solve and act continuously. The base level is a co-pilot that may work alongside a human performing tasks, all the way up to fully autonomous operation, uncovering data insights from complex systems that humans would have difficulty in identifying.

If the data security issues can be dealt with, firms may well end up with many strategic business AI agents running and performing small and large tasks, taking a lot of the donkey work from extracting value from company data, be that an Excel spreadsheet or a BIM model.

AI agents will be key IP tools for companies and will need management and monitoring. The first hurdle to overcome is realising that the nature of software, applications and data is going to change radically and in the not-too-distant future.


Main image: Stable Diffusion architectural images courtesy of James Gray. Image (left) generated with ModelMakerXL, a custom trained LoRA by Ismail Seleit. Follow Gray on LinkedIn

Bentley Systems appoints new COO
https://aecmag.com/business/bentley-systems-appoints-new-coo/
Tue, 14 Jan 2025 10:03:23 +0000

James Lee transitions from Google, where he oversaw startups and AI operations

Bentley Systems has announced the appointment of James Lee as chief operating officer. Lee joins from Google, where he was general manager overseeing startups and artificial intelligence operations at Google Cloud.

Before his tenure at Google, Lee spent 12 years at SAP, in roles including chief operating officer for SAP Ariba and Fieldglass, and chief operating officer and general manager of sales for SAP Greater China.

At Bentley, Lee will strengthen cross-functional coordination between planning and execution, oversee operations in China and Japan, and take responsibility for portfolio development, including growth ventures such as Bentley Asset Analytics.

Bentley Systems’ CEO Nicholas Cumins remarked, “I am excited to welcome James, a world-class operational leader, to Bentley. His energy and experience managing operations and investment initiatives at SAP and Google will be instrumental to Bentley as we continue to scale up and drive our ambitious growth agenda.”

To boost innovation and strengthen alignment between product execution and technology strategy, Bentley has also announced that product development responsibilities have been consolidated under chief technology officer Julien Moutte. As a result, the chief product officer role has been eliminated and, by mutual agreement, Mike Campbell will leave the company.

Cumins explained, “Streamlining our organisational reporting structure and consolidating product development under Julien puts us in a stronger position to capture the many growth opportunities that we have opened up with infrastructure AI and that are incremental to our core business and consistent momentum. Without a doubt, AI is our generation’s paradigm shift and has huge potential for improving infrastructure delivery and performance.”


James Lee, Bentley Systems COO
