Twinmotion now supports Nvidia DLSS 4

Neural rendering technology can deliver close to a 4x boost in frame rates

Twinmotion 2025.1.1, the latest release of the real-time rendering software from Epic Games, supports Nvidia DLSS 4, a suite of neural rendering technologies that uses AI to boost 3D performance.

Epic Games reports that when DLSS 4 is enabled in Twinmotion, it can render almost four times as many frames per second (FPS) as when DLSS is switched off.

DLSS 4 uses a technology called Multi Frame Generation, an evolution of Single Frame Generation, which was introduced in DLSS 3.

Single Frame Generation uses the AI Tensor cores on Nvidia GPUs to interpolate one synthetic frame between every two traditionally rendered frames, improving performance by reducing the number of frames that need to be rendered by the GPU.

Multi Frame Generation extends this approach by using AI to generate up to three frames between each pair of rendered frames, further increasing frame rates. The technology is only available on Nvidia’s new Blackwell-based RTX GPUs, which have been architected specifically to better support neural rendering.

Multi Frame Generation can be used alongside Super Resolution, where AI upscales a lower-resolution frame to a higher resolution, and Ray Reconstruction, where AI is used to generate additional pixel data in ray-traced scenes. According to Nvidia, when all DLSS technologies are combined, 15 out of every 16 pixels in a frame can be generated by AI. This greatly reduces the computational demands of traditional rendering and significantly boosts overall performance.
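
To see where that 15-out-of-16 figure comes from, here is the rough arithmetic, assuming Super Resolution runs in a 4x upscaling mode and Multi Frame Generation adds three AI frames per rendered frame. These ratios are illustrative, not a statement about how any particular Twinmotion scene is configured.

```python
# Back-of-the-envelope arithmetic behind the "15 out of 16 pixels" figure (illustrative).
upscale_factor = 4          # 1 in 4 output pixels is traditionally rendered per rendered frame
frames_per_rendered = 4     # 1 rendered frame + 3 AI-generated frames

rendered_share = (1 / upscale_factor) / frames_per_rendered
ai_share = 1 - rendered_share

print(f"Traditionally rendered pixels: {rendered_share:.4f}")   # 0.0625  (1/16)
print(f"AI-generated pixels:           {ai_share:.4f}")         # 0.9375  (15/16)
```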

Twinmotion 2025.1.1 includes several other features.

A new 3D Grass material allows users to drag and drop five types of grass material onto any surface. The Configurations feature, first introduced in Twinmotion 2025.1 to allow users to build interactive 3D presentations that showcase different variations of a project, has also been enhanced. Users can now export configurators to Twinmotion Cloud for easy sharing, and use a mesh as a trigger — for example, clicking on a door handle to open a door.

Studio Tim Fu: AI-driven design

The pioneering London practice is reimagining architectural workflows through AI, blending human creativity with machine intelligence to accelerate and elevate design, writes Greg Corke

It’s rare to see an architectural practice align itself so openly with a specific technology. But Studio Tim Fu is breaking that mould. Built from the ground up as an AI-first practice, the London-based studio is unapologetically committed to exploring how generative AI can reshape architecture—from the earliest concepts to fully constructable designs.

“We want to explore in depth how we can use the technology of generative AI, of neural networks, deep learning, and large language models as well, in an effort to facilitate an accelerated way of designing and building, but also thinking,” explains founder Tim Fu.



Studio Tim Fu’s current methodology uses AI early in the design process to boost creativity, accelerate visualisation, and improve client communication — all while maintaining technical feasibility.

The technological journey began during Fu’s time at Zaha Hadid Architects, where he explored the potential of computational design to rationalise complex geometries. “We were thinking about the complexity of design and how we can bring that to fruition through computational processes and technologies,” he recalls.

This early exploration laid the groundwork for the Studio’s current AI-driven approach, which involves a sophisticated iterative process that blends human intention with machine learning capabilities. Initial AI-generated concepts are refined through human guidance, then reinterpreted by diffusion AI technology. This creates a dynamic feedback loop for rapid conceptualisation, where hundreds of design expressions can be explored in a single day.


Fu’s technical approach employs a complex system of AI tools, from common text-to-image generators such as Midjourney, Dall-E and Stable Diffusion to custom-trained models. Using these tools at the start of a project presents a ‘gradient of possibilities’, says Fu, drawing on AI’s creative agency while incorporating human intentions. The team uses text prompts to spark fresh ideas, producing ‘mood boards’ of synthetic visuals, as well as hand sketches to guide the AI.

“We use a mesh of back and forth with different design tools,” he explains. Ideas are generated and refined before they are translated into 3D geometry using modelling tools like Rhino.

“Once we figure out the architectural design and planning that solves real life situation and constraints and context, we bring those back into the AI visualising models, to visualise and continue to iterate over our existing 3D models,” he says. This enables the design team to see, for example, different possible expressions of window details and geometries. It’s a continuous loop—a creative dialogue between human intention and machine imagination.

Fu believes the results speak for themselves: in just one week, his team can deliver high-quality, client-ready concepts that far exceed what’s possible using conventional methods within the same time frame.


[Image: Lake Bled Estate masterplan in Slovenia. Credit: Studio Tim Fu]


This level of efficiency brings new economic opportunities. Studio Tim Fu can charge clients less than traditional architects while boosting its earnings, all within conventional pricing structures. “We can lower the price because we can, and we can up the value, so it’s a win for the client and it’s good for us,” he says.

AI meets heritage

The Studio’s work on the Lake Bled Estate masterplan in Slovenia, its first fully AI-driven architectural project, serves as a landmark demonstration of these technical capabilities.

Spanning an expansive 22,000 square metre site, the project comprises six ultra-luxury villas set alongside the historic Vila Epos, a protected cultural monument of the highest national significance.

To produce a design that respects its historical context while creating an elevated luxury space, Studio Tim Fu synthesises heritage data with AI.

The Studio captured the local architectural vernacular by analysing material characteristics and extracting geometric parameters to comply with strict heritage regulations, including roof layout, height, and slope.

“This is the first time we are showing AI in its most contextually reflective way,” says Fu, “Something that is contrary to all the AI experiments that have come out since the dawn of diffusion AI processes.

“We want to showcase that this whole diffusion process can be completely controlled under our belt and be used for specifically addressing those issues [of respecting historical context].”

Delivering the details

Studio Tim Fu currently applies AI primarily at the concept-to-detail design stage. However, Fu believes we’re at a pivotal moment where AI is poised to take on more technical aspects of architectural design—particularly in areas like BIM modelling and dataset management.

“Because these are technical requirements, technical needs, and technical goals, it’s something that can be quantified,” he explains. “If it’s maximising certain functionality, while minimising the use of material and budget, these are numerical data that can be optimised. We’re just beginning that process of developing artificial general intelligence.”

But where does this leave humans? While Fu acknowledges that we must humbly recognise our limitations, he believes that human specialists—architects, designers, and fabricators—will remain essential, each working with AI within their own domain. At the same time, he sees enormous potential for AI to unify these fields.

“What AI can do is bring all of the human processes into a cohesive, streamlined decision making, to design to production process, because that’s what AI is good at. It’s good at cohesing large data sets, it’s good at addressing macro scale and micro scale values in the same time.”


Main image: Lake Bled Estate masterplan in Slovenia. Credit: Studio Tim Fu

AI agents for civil engineers

Anande Bergman explores how AI agents can be used to create powerful solutions to help engineers work more efficiently but still respect their professional responsibilities

As a structural engineer, I’ve watched with excitement as AI transforms various industries. But I’ve also noticed our field’s hesitation to adopt these technologies — and for good reason. We deal with safety-critical systems where reliability is a requirement.

In this article, I’ll show you how we can harness AI’s capabilities while maintaining the reliability we need as engineers. I’ll demonstrate this with an AI agent I created that can interpret truss drawings and run FEM analysis (code repository included), and I’ll give you resources to create your own agents.

The possibilities here have me truly excited about our profession’s future! I’ve been in this field for years, and I haven’t been this excited about a technology’s potential to transform how we work since I first discovered parametric modelling.



What makes AI agents different?

Unlike traditional automation that follows fixed rules, AI agents can understand natural language, adapt to different situations, and even solve problems creatively. Think of them as smart assistants that can understand what you want and get it done.

For example, while a traditional Python script needs exact coordinates, boundary conditions, and forces to analyse a truss, an AI agent can look at a hand-drawn sketch or AutoCAD drawing and figure out the structure’s geometry by itself (see image below). It can even request any missing information needed for the analysis. This flexibility is powerful, but it also introduces unpredictability — something we engineers typically try to avoid.


[Image: Anande Bergman]
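
For contrast, here is a purely illustrative sketch of the kind of hard-coded input a traditional truss script demands; none of these names or values come from the article.

```python
# Illustrative only: the manual input a conventional truss analysis script typically needs.
nodes = {1: (0.0, 0.0), 2: (2.0, 0.0), 3: (4.0, 0.0), 4: (2.0, 1.5)}   # id: (x, y) in m
elements = [(1, 2), (2, 3), (1, 4), (2, 4), (3, 4)]                     # node pairs
supports = {1: "pinned", 3: "roller"}                                   # boundary conditions
loads = {4: (0.0, -10_000.0)}                                           # id: (Fx, Fy) in N

# An AI agent removes this transcription step: it extracts the same data from a
# sketch or drawing, and asks the user for anything it cannot read off the image.
```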


The rise of specialised AI agents

It’s 2025, and you’ve probably heard of ChatGPT, Claude, Llama, and other powerful Large Language Models (LLMs) that can do amazing things, like being incredibly useful coding assistants. However, running these large models in production is expensive, and their general-purpose nature sometimes makes them underperform in specific tasks.

This is where specialised agents come in. Instead of using one large model for everything, we can create smaller, fast, focused agents for specific tasks — like analysing drawings or checking building codes. These specialised agents are:

  • More cost-effective to run
  • Better at specific tasks
  • Easier to validate

Agents are becoming the next big thing. As Microsoft CEO Satya Nadella points out, “We’re entering an agent era where business logic will increasingly be handled by specialised AI agents that can work across multiple systems and data sources”.

For engineering firms, this means we can create agents that understand our specific workflows and seamlessly integrate with our existing tools and databases.

The engineering challenge

Here’s our core challenge: while AI offers amazing flexibility, engineering demands absolute reliability. When you’re designing a bridge or a building, you need to be certain about your calculations. You can’t tell your client “the AI was 90% sure this would work.”

On the other hand, creating a rule-based engineering automation tool that can handle all kinds of inputs and edge cases while maintaining 100% reliability is a significant challenge. But there’s a solution.

Bridging the gap: reliable AI agents

We can combine the best of both worlds by creating a system with three key components (see image below):


[Image: Anande Bergman]


  1. AI agents handle the flexible parts – understanding requests, interpreting drawings, and searching for data.
  2. Validated engineering tools perform the critical calculations.
  3. Human in the loop: You, the engineer, maintain control — verifying data, checking results, and approving modifications.
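
A minimal sketch of how those three components can be wired together is shown below; the function names and placeholder values are assumptions for illustration, not code from the project described later.

```python
# Minimal sketch of the three-component pattern (all names are illustrative).

def ai_interpret_sketch(image_path: str) -> dict:
    """Flexible part: an AI agent extracts nodes, elements, supports and loads."""
    return {"nodes": 4, "elements": 5}            # placeholder output

def validated_fem_analysis(truss: dict) -> dict:
    """Critical part: a deterministic, tested solver with no AI involved."""
    return {"max_stress_mpa": 180.0}              # placeholder output

def engineer_approves(item: dict) -> bool:
    """Human in the loop: in practice a review step in the UI, not a console prompt."""
    return input(f"Approve {item}? [y/n] ").strip().lower() == "y"

truss = ai_interpret_sketch("truss_sketch.png")
if engineer_approves(truss):
    results = validated_fem_analysis(truss)
    if engineer_approves(results):
        print("Accepted:", results)
```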

Let me demonstrate this approach with a practical example I built: a truss analysis agent.

Engineering agent to analyse truss structures

Just as an example, I created a simple agent that calculates truss structures using the LLM Claude Sonnet. You give it an image of the truss, it extracts all the data it needs, runs the analysis, and gives you the results.

You can also ask the agent for any kind of information, like material and section properties, or to modify the truss geometry, loads, forces, etc. You can even give it some more challenging problems, like “Find the smallest IPE profile so the stresses are under 200 MPa”, and it does!

The first time I saw this working I couldn’t help but feel that childlike excitement engineers get when something cool actually works. Here is where you start seeing the power of AI agents in action.

It is capable of interpreting different types of drawings and creating a model, which saves a lot of time compared with a typical Python script, where you would need to enter all the node coordinates by hand and define the elements, their properties, loads, etc.

Additionally, it solves problems using information I did not define in the code, like the section properties of IPE profiles, the material properties of steel, or the process for choosing the smallest beam that fulfils the stress requirement. It does everything by itself. N.B. You can find the source code of this agent in the resources section at the end.

In the video below, you can see the app I made using VIKTOR.AI


How does it work: an overview

Now let’s look behind the scenes to understand how our AI agent works, so you can make one yourself.

In the image below, you can see the main AI agent at the centre: the brains of the operation. This is the agent that chats with the user and accepts text and images as input.


[Image: Anande Bergman]


Additionally, it has a set of tools at its disposal, including another AI agent, which it calls on when it decides they are needed to complete the job:

  • Analyse Image: AI Agent specialised in interpreting images of truss structures and returning the data needed to build the FEM model.
  • Plot Truss: A simple Python function to display the truss structures.
  • FEM Analysis: Validated FEM analysis script programmed in Python.

The Main agent

The Main agent is powered by Claude 3.7 Sonnet, the latest LLM from Anthropic. Basically, you are using the same model you chat with when using Claude in the browser, but you call it from your own code via their API, give the model clear guidelines on how to behave, and provide it with a set of tools it can use to solve problems.

You can also use other models like ChatGPT, Llama 3.x, and more, as long as they support tool calling natively (using functions). Otherwise, it gets complicated to use your validated engineering scripts.

For example, here’s how we get an answer from Claude using Python (see image below).


[Image: Anande Bergman]


Let’s break down these key components:

  • SYSTEM MESSAGE: This is a text that defines the agent’s role, behaviour guidelines, boundaries, etc.
  • TOOLS_DESCRIPTION: Description of what tools the agent can use, their input and output.
  • messages: This is the complete conversation, including all previous user and assistant (Claude) messages, so Claude knows the context of the conversation.
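
Since the article shows the actual code only as an image, here is a sketch of what such a call typically looks like with the Anthropic Python SDK; the model alias, token limit and placeholder content are assumptions, not the author’s code.

```python
import anthropic

SYSTEM_MESSAGE = "You are a structural engineering assistant. Use the provided tools for all calculations."
TOOLS_DESCRIPTION = [{              # trimmed to one tool here; see the 'Tool use' section below
    "name": "fem_analysis",
    "description": "Run the validated FEM script on truss data supplied as JSON.",
    "input_schema": {"type": "object", "properties": {"truss": {"type": "object"}}, "required": ["truss"]},
}]
messages = [{"role": "user", "content": "Analyse the attached truss sketch."}]

client = anthropic.Anthropic()      # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-latest",   # or a pinned snapshot of Claude 3.7 Sonnet
    max_tokens=2048,
    system=SYSTEM_MESSAGE,              # role, behaviour guidelines, boundaries
    tools=TOOLS_DESCRIPTION,            # what the agent is allowed to request
    messages=messages,                  # the full conversation so far
)
print(response.content)                 # a list of text and/or tool_use blocks
```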

Tool use

One of the most powerful features of Claude and other modern LLMs is their ability to use tools autonomously. When the agent needs to solve a problem, it can decide which tools to use and when to use them. All it needs is a description of the available tools, like in the image below.


[Image: Anande Bergman]
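
The author’s actual definitions are in the image above; as a rough guide, tool descriptions in Anthropic’s tool-use format look something like this. The names and schemas below are illustrative guesses based on the three tools listed earlier, not the author’s exact definitions.

```python
TOOLS_DESCRIPTION = [
    {
        "name": "analyse_image",
        "description": "Interpret an image of a truss and return nodes, elements, supports and loads as JSON.",
        "input_schema": {
            "type": "object",
            "properties": {"image_id": {"type": "string", "description": "Identifier of the uploaded drawing"}},
            "required": ["image_id"],
        },
    },
    {
        "name": "plot_truss",
        "description": "Plot the truss geometry so the user can visually check the interpreted model.",
        "input_schema": {
            "type": "object",
            "properties": {"truss": {"type": "object", "description": "Truss data in the agreed JSON format"}},
            "required": ["truss"],
        },
    },
    {
        "name": "fem_analysis",
        "description": "Run the validated FEM script and return member forces, stresses and displacements.",
        "input_schema": {
            "type": "object",
            "properties": {"truss": {"type": "object", "description": "Truss data in the agreed JSON format"}},
            "required": ["truss"],
        },
    },
]
```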


The agent can’t directly access your computer or tools — it can only request to use them. You need a small intermediary function that listens to these requests, runs the appropriate tool, and sends the results back. So don’t worry, Claude won’t take over your laptop… yet 😉
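
That intermediary is typically a short loop along these lines; the sketch below shows generic Anthropic-style tool dispatch rather than the author’s implementation.

```python
def run_agent(client, messages, tools, local_functions):
    """Keep calling Claude until it stops requesting tools, running each request locally."""
    while True:
        response = client.messages.create(
            model="claude-3-7-sonnet-latest",
            max_tokens=2048,
            tools=tools,
            messages=messages,
        )
        messages.append({"role": "assistant", "content": response.content})

        tool_calls = [block for block in response.content if block.type == "tool_use"]
        if not tool_calls:
            return response                                   # no more requests: final answer

        results = []
        for call in tool_calls:
            output = local_functions[call.name](**call.input)  # run the real tool locally
            results.append({"type": "tool_result", "tool_use_id": call.id, "content": str(output)})
        messages.append({"role": "user", "content": results})
```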

The Analyse image agent

Here’s a fun fact: the agent that analyses truss images is actually another instance of Claude! So yes, we have Claude talking to Claude (shhh…. don’t tell him 🤫). I did this to show how agents can work together, and honestly, it was the simplest way to get the job done.

This second agent uses Claude’s ability to understand both images and text. I give it an image and ask it to return the truss data in a specific JSON format that we can use for FEM analysis. Here is the prompt I use.


[Image: Anande Bergman]
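
The author’s exact prompt is shown in the image above. As a hedged sketch, a vision request of this kind can be made like so; the JSON keys, file name and wording are assumptions, not the author’s schema.

```python
import base64
import anthropic

client = anthropic.Anthropic()

with open("truss_sketch.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

prompt = (
    "Extract the truss from this drawing and return ONLY JSON with the keys "
    "'nodes' (id -> [x, y] in metres), 'elements' (pairs of node ids), "
    "'supports' (node id -> 'pinned' or 'roller') and 'loads' (node id -> [Fx, Fy] in N)."
)

response = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1500,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text", "text": prompt},
        ],
    }],
)
print(response.content[0].text)   # the JSON string to feed into the FEM model
```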


I’m actually quite impressed by how well Claude can interpret truss drawings right out of the box. For complex trusses, though, it sometimes gets confused, as you can see in the test cases later.

This is where a specialised agent, trained specifically for analysing truss images, would make a difference. You could create this using machine learning or by fine-tuning an LLM. Fine-tuning means giving the model additional training on your specific type of data, making it better at that task (though potentially worse at others).

Test case: book example

The first test case is an example taken from a book (see image below). What’s interesting is that the measurements and forces are given as symbols, with the values provided below. You can also see the x and y axes with arrows and numbers, which could be distracting.


[Image: Anande Bergman]


The agent did a very good job. Dimensions, forces, boundary conditions, and section properties are correct. The only issue is that element 8 is pointing in the wrong direction, which is something I asked the agent to correct, and it did.

Test case: AutoCAD drawing

This technical drawing has many more elements than the first case (see image below). You can also see many numerical annotations, which could be distracting.


[Image: Anande Bergman]


Again, the agent did a great job. Dimensions and forces are perfect. Notice how the agent understands that, for example, the force 60k is 60,000 N. The only error I could spot is that, while the supports are placed at the correct location, two of them should be rolling instead of fixed, but given how small the symbols are, this is very impressive. Note that the agent gets a low-resolution (1,600 x 400 pixel) PNG image, not a real CAD file.

Test case: transmission tower

This is definitely the most challenging of the three trusses, and all data is in the text. It also requires the agent to do a lot of math. For example, the forces are at an angle, so it needs to calculate the x and y components of each force. It also needs to calculate x and y positions of nodes by adding different measurements like this: x = a + a + b + a + a.

As you can see in the image below, this was a bit too much of a challenge for our improvised truss vision agent, and for more serious jobs, we need specialist agents. Now, in defence of the agent, the image size was quite small (700 x 600 pixels), so maybe with larger images and better prompts, it would do a better job.


[Image: Anande Bergman]


An open-source agent for you

I’ve created a simplified version of this agent that demonstrates the core concepts we’ve discussed. This implementation focuses on the essential components:

  • A basic terminal interface for interaction
  • Core functionality for truss analysis
  • Integration with the image analysis and FEM tools

The code is intentionally kept minimal to make it easier to understand and experiment with. You can find it in this GitHub repository. This simplified version is particularly useful for:

  • Understanding how AI agents can integrate with engineering tools
  • Learning how to structure agent-based systems
  • Experimenting with different approaches to truss analysis

While it doesn’t include all the features of the full implementation, it provides a solid foundation for learning and extending the concept. You can use it as a starting point to build your own specialised engineering agents. See video below.



Conclusions

After building and testing this truss analysis agent, here are my key takeaways:

1) AI agents are game changers for engineering workflows

  • They can handle ambiguous inputs like hand-drawn sketches
  • They adapt to different ways of describing problems
  • They can combine information from multiple sources to solve complex tasks

2) Reliability comes from smart architecture

  • Let AI handle the flexible, creative parts
  • Use validated engineering tools for critical calculations
  • Keep engineers in control of key decisions

3) The future is specialised

  • Instead of one large AI trying to do everything
  • Create focused agents for specific engineering tasks
  • Connect them into powerful workflows

4) Getting started is easier than you think

  • Modern LLMs provide a great foundation
  • Tools and APIs are readily available
  • Start small and iterate

Remember: AI agents aren’t meant to replace engineering judgment — they’re tools to help us work more efficiently while maintaining the reliability our profession demands. By combining AI’s flexibility with validated engineering tools and human oversight, we can create powerful solutions that respect our professional responsibilities.

I hope you’ll join me in exploring what’s possible!

Resources


About the author

Anande Bergman is a product strategist and startup founder who has contributed to multiple successful tech ventures, including a globally-scaled engineering automation platform.

With a background in aerospace engineering and a passion for innovation, he specialises in developing software and hardware products and bringing them to market.

Drawing on his experience in both structural engineering and technology, he writes about how emerging technologies can enhance professional practices while maintaining industry standards of reliability.

AI: Information Integrity

As AI reshapes how we engage with information, Emma Hooper, head of information management strategy at RLB Digital, explores how we can refine large language models to improve accuracy, reduce bias, and uphold data integrity — without losing the essential human skill of critical thinking

In a world where AI is becoming an increasingly integral part of our everyday lives, the potential benefits are immense. However, as someone with a background in technology — having spent my career producing, managing or thinking about information — I continue to contemplate how AI will alter our relationship with information and how the integrity and quality of data will be managed.

Understanding LLMs

AI is a broad field focused on simulating human intelligence, enabling machines to learn from examples and apply this learning to new situations. As we delve deeper into its sub-types, we become more detached from the inner workings of these models, and the statistical patterns they use become increasingly complex. This is particularly relevant with large language models (LLMs), which generate new content based on training data and user instructions (prompts).

A large language model (LLM) uses a transformer model, a specific type of neural network. These models learn patterns and connections between words and phrases, so the more examples they are fed, the more accurate they become. Consequently, they require vast amounts of data and significant computational power, which puts considerable pressure on the environment. These models power tools such as ChatGPT, Gemini, and Claude.



The case of DeepSeek-R1

DeepSeek-R1, which has recently been in the news, demonstrates how constraints can drive innovation through good old-fashioned problem-solving. This open-source LLM uses rule-based reinforcement learning, making it cheaper and less compute-intensive to train than more established models.

However, since it is an LLM, it still faces limitations in output quality. When it comes to accuracy, LLMs are statistical models that operate on probabilities, so their responses are limited to what they’ve been trained on. They perform well when operating within their dataset, but if there are gaps or they go out of scope, inaccuracies or hallucinations can occur.

Inaccurate information is problematic when reliability is crucial, but trust in quality isn’t the only issue. General LLMs are trained on internet content, but much domain-specific knowledge isn’t captured online or is behind downloads/paywalls, so we’re missing out on a significant chunk of knowledge.

Training LLMs: the built environment

Training LLMs is resource-intensive and requires vast amounts of data. However, data sharing in the built environment is limited, and ownership is often debated. This raises several questions in my mind: Where does the training data come from? Do trainers have permission to use it? How can organisations ensure their models’ outputs are interoperable? Are SMEs disadvantaged due to limited data access? How can we reduce bias from proprietary terminology and data structures? Will the vast variation hinder the ability to spot correct patterns?

With my information manager hat on: without proper application and understanding, it’s not just rubbish in, rubbish out; it’s rubbish out on a huge scale, all of it artificial, completely overwhelming us.

How do we improve the use of LLMs?

There are techniques such as Retrieval Augmented Generation (RAG), which uses vector databases to retrieve relevant information from a specific knowledge base. This information is included in the LLM prompt to produce outputs that are much more relevant and up to date. Having more control over the knowledge base ensures the sources are known and reliable.
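
As a minimal illustration of the retrieval step, assuming an embedding function and a pre-embedded knowledge base are already available (both are placeholders here, not a recommendation of any particular stack):

```python
import numpy as np

def retrieve(query, knowledge_base, embed, top_k=3):
    """knowledge_base is a list of (text, vector) pairs; embed() is whatever embedding model you use."""
    q = embed(query)
    scored = [(float(np.dot(q, vec)) / (np.linalg.norm(q) * np.linalg.norm(vec)), text)
              for text, vec in knowledge_base]                     # cosine similarity
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]

def build_prompt(query, knowledge_base, embed):
    context = "\n".join(retrieve(query, knowledge_base, embed))
    return ("Answer using only the sources below.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")
```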

This leads to an improvement, but the machine still doesn’t fully understand what it’s being asked. By introducing more context and meaning, we might achieve better outputs. This is where returning to information science and using knowledge graphs can help.

A knowledge graph is a collection of interlinked descriptions of things or concepts. It uses a graph-structured data model within a database to create connections – a web of facts. These graphs link many ideas into a cohesive whole, allowing computers to understand real world relationships much more quickly. They are underpinned by ontologies, which provide a domain-focused framework to give formal meaning. This meaning, or semantics, is key. The ontology organises information by defining relationships and concepts to help with reasoning and inference.

Knowledge graphs enhance the RAG process by providing structured information with defined relationships, creating more context-enriched prompts. Organisations across various industries are exploring how to integrate knowledge graphs into their enterprise data strategies. So much so that they have even made it onto the Gartner Hype Cycle, on the slope of enlightenment.
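
To make the idea concrete, here is a toy example of graph-structured facts feeding a prompt; the triples and the fire-door example are purely illustrative.

```python
# Facts as subject-predicate-object triples: the basic shape of a knowledge graph.
triples = [
    ("Door D-101", "is_a", "fire door"),
    ("fire door", "requires", "self-closing device"),
    ("Door D-101", "located_in", "Corridor C2"),
]

def facts_about(entity, triples):
    """Follow outgoing edges one hop from an entity and phrase them as sentences."""
    return [f"{s} {p.replace('_', ' ')} {o}" for s, p, o in triples if s == entity]

context = ". ".join(facts_about("Door D-101", triples))
print(f"Context: {context}.\nQuestion: What hardware does door D-101 need?")
```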

The need for critical thinking

From an industry perspective, semantics is not just where the magic lies for AI; it is also crucial for sorting out the information chaos in the industry. The tools discussed can improve LLMs, but the results still depend on a backbone of good information management. This includes having strategies in place to ensure information meets the needs of its original purpose and implementing strong assurance processes to provide governance.

Therefore, before we move too far ahead, I believe it’s crucial for the industry to return to the theory and roots of information science. By understanding this, we can lay strong foundations that all stakeholders can work from, providing a common starting point and a sound base to meet AI halfway and derive the most value from it.

Above all, it’s important not to lose sight of the fact that this begins and ends with people, and one of the greatest things we can ever do is to think critically and keep questioning!

Infraspace: reimagining civil infrastructure design

Greg Corke caught up with Andreas Bjune Kjølseth, CEO of Infraspace, to explore how the civil engineering software startup is looking to transform early-stage design using generative design and AI

In the world of infrastructure design, traditional processes have long been plagued by inefficiencies and fragmentation. That’s the view of engineer turned software developer Andreas Bjune Kjølseth, CEO of Norwegian startup Infraspace. “Going from an idea to actually having a decision basis can be a quite tedious process,” he explains.

Four years ago, Kjølseth left his career in civil engineering consulting and founded Infraspace to develop a brand new generative design tool for civil infrastructure alignments – road, rail or power networks. In his years as an engineer and BIM manager, Kjølseth was left frustrated by the limitations of traditional processes. Civil engineers commonly must navigate multiple software tools, explains Kjølseth – sketching in one platform, generating 3D models in another, using GIS for analysis of land take and environmental impact, and then manually assembling, comparing and presenting alternatives.





Infraspace aims to unify this fragmented workflow within a single, cloud-based platform. The software is primarily designed to tackle the initial phases of linear civil infrastructure projects, using an outcome-based approach, as Kjølseth explains. “Users can define where they want the generative AI engine to explore alternatives and define the outcomes, such as, ‘I want options with the least possible construction costs, shortest travel time or length, and the least land take in certain areas.’ Then the algorithm will quickly explore opportunities to make better solutions.”

The Infraspace cloud platform generates thousands of alternatives within minutes, enabling engineers to explore options they might not have considered had they worked manually.


[Image: Design options are presented as a 3D model alongside a KPI analytics dashboard. Credit: Infraspace]

[Image: Infraspace can be used on a variety of civil infrastructure alignment projects – road, rail or power networks. Credit: Infraspace]

Design options are displayed via an intuitive web-based interface, featuring a 3D model alongside an analytics dashboard with key performance indicators (KPIs) such as cost, route length, land take, and cut-and-fill volumes.

The system can also be used to assess the environmental impact of proposed designs, including carbon footprint, viewshed, noise, and which buildings or areas might be affected.

Based on this information, engineers can quickly compare and evaluate multiple design alternatives, then use the software to refine designs further. Because the software is cloud-based, it is easier for multiple stakeholders to understand the consequences more quickly, explains Kjølseth.

“The typical project manager often has limited access to advanced CAD, BIM or analysis software. With Infraspace they can quickly log into their projects in their browser and see the 3D models together with the analytics instantly,” he says. “It’s also possible to invite external stakeholders into the project to explore a selected number of alternatives.”


[Image: Infraspace can quickly assess the potential environmental impact of proposed designs. Credit: Infraspace]

Project seeds

To start a project, users can pull in data from various sources, such as Mapbox or Google, or upload custom digital terrain models, bedrock surface models, or GIS data.

The design can then be kickstarted in several ways. An engineer could simply define the start and end point of an alignment, then let the software work out the best alternatives based on set goals. Alternatively, an engineer can define geometric constraints—such as sketching a corridor or marking environmentally protected areas as off-limits.


The system is not limited to blank-slate designs. It can also import alignments from traditional infrastructure design tools like AutoCAD Civil 3D and use them as a basis for optimisation. As Kjølseth explains, some engineers are even using the platform just for its analytical capabilities, to get fast feedback on traditionally crafted designs. The software offers import/export for a range of formats including LandXML, IFC, OBJ, BCF, glTF, DXF and others.

Adaptability across geographies

Infraspace is not hard coded for specific national design standards, but as Kjølseth explains, the platform captures the fundamental mechanisms of infrastructure design. It allows engineers to define geometric constraints, set curve radii, specify vertical alignment parameters, and adapt to different project types including roads, railways, and power transmission lines. It can handle projects with varying levels of design freedom, from short access roads to expansive highway corridors.

Designed by engineers, for engineers

For civil engineers seeking to streamline their design process, reduce environmental impact, and explore more design options, faster, Infraspace offers an interesting alternative to traditional fragmented workflows. Most importantly, with a team combining civil engineering expertise and software development skills, it’s clear the company understands the nuances of infrastructure design.

While Infraspace is currently focused on early-stage design and optimisation, its ambitions extend beyond. “We will continue to add more features as we go,” says Kjølseth. “I see that generative design as a concept and the platform we have, can definitely be applied to many use cases — during the latter stages of a project, and to even more complex problems.”


Main image: The generative AI engine can deliver thousands of design options in minutes

Higharc AI 3D BIM model from 2D sketch

In the emerging world of BIM 2.0, there will be generic new BIM tools and expert systems dedicated to certain building types. Higharc is a cloud-based design solution for US timber frame housing. The company recently demonstrated impressive new AI capabilities.

While AI is in full hype cycle and not a day passes without some grandiose AI claim, there are some press releases that raise wizened eyebrows at AEC Magazine HQ. North Carolina-based start-up Higharc has demonstrated a new AI capability which can automatically convert 2D hand sketches to 3D BIM models within its dedicated housing design system. This type of capability is something that several generic BIM developers are currently exploring in R&D.

Higharc AI, currently in beta, uses visual intelligence to auto-detect room boundaries and wall types by analysing architectural features sketched in plan. In a matter of minutes, the software then creates a correlated model comprising all the essential 3D elements that were identified in the drawing – doors, windows, and fixtures.



Everything is fully integrated with Higharc’s existing auto-drafting, estimating, and sales tools, so that construction documents, take-offs, and marketing collateral can be automatically generated once the design work is complete.

In one of the demonstrations we have seen, a 2D sketch of a second floor is imported and analysed, and the software then automatically generates all the sketched rooms and doors, with interior and exterior walls and windows. The AI-generated layout even means the roof design adapts accordingly. Higharc AI is now available via a beta program to select customers.



Marc Minor, CEO and co-founder of Higharc explains the driving force behind Higharc AI. “Every year, designers across the US waste weeks or months in decades-old CAD software just to get to a usable 3D model for a home,” he says.

“Higharc AI changes that. For the first time, generative AI has been successfully applied to BIM, eliminating the gap between hand sketches and web-based 3D models. We’re working to radically accelerate the home design process so that better homes can be built more affordably.”

AI demo

In the short video provided by Higharc, we can see a hand-drawn sketch imported into the Autolayout tool. The sketch is a plan view of a second floor, with bedrooms, bathrooms and stairs, and with walls, doors and windows indicated. There are some rough area dimensions and handwritten notes denoting room allocation type. The image is then analysed. The result is an opaque overlay, with each room (space) tagged appropriately, and a confirmation of how many rooms it found. There are settings for rectangle tolerance and minimum room areas. The next phase is to generate the rooms from this space plan.

We now switch to Higharc’s real-time rendered modelling and drawing environment, where each room is inserted on the second floor of an existing single-storey residential BIM model; walls, windows, doors and stairs are added, and materials are applied, all while referencing an image of the sketch. The result is an accurate BIM model, combining traditional modelling with AI sketch-to-BIM generation.

What is Higharc?

Founded in 2018, Higharc develops a tailored cloud-based BIM platform, specifically designed to automate and integrate the US housing market, streamlining the whole process of design, sales, and constructing new homes.

Higharc is a service sold to home builders that provides a tailored solution integrating 3D parametric modelling, the automatic creation of drawings, 3D visualisations, material quantities and costing estimates, related construction documents and planning permit applications. AEC Magazine looked at the development back in 2022.

The company’s founders, some of whom were ex-Autodesk employees, recognised that there needed to be new cloud-based BIM tools and felt the US housing market offered a greenfield opportunity, as most of the developers and construction firms in this space had completely avoided the BIM revolution and were still tied to CAD and 2D processes. With this new concept, Higharc offered construction firms easy-to-learn design tools, which even prospective house buyers could use to design their dream homes. As the Higharc software models every plank and timber frame, accurate quantities can be connected to ERP systems for immediate and detailed pricing for every modification to the design.

The company claims its technology enhances efficiency, accelerating a builder’s time to market by two to three times, reducing the timeline for designing and launching new plots by 75% (approximately 90 days). Higharc also claims that plan designs and updates are carried out 100 times faster than with traditional 2D CAD software.

To date, Higharc has raised $80 million and has attracted significant investment and support from firms such as Home Depot Ventures, Standard Investments, and former Autodesk CEO Carl Bass. The company has managed to gain traction in the US market and is being used to build over 40,000 homes annually, representing $19 billion in new home sales volume.

While Higharc’s first go-to-market was established house-building firms, the company has used the money raised to expand its reach to address those who want to design and build their own homes. The investment by Home Depot would also indicate that the system will integrate with the popular local building merchants, so self-builders can get access to more generic material supply information. The company also plans to extend the building types it can design, eventually adding retail and office to its residential origins.

In conversation

After the launch, AEC Magazine caught up with co-founder Michael Bergin and company CEO Marc Minor to dig a little deeper into the origin of the AI tool and how it’s being used. We discovered that this is probably the most useful AI introduction in any BIM solution we have seen to date, as it actually solves a real-world problem – not just a nice-to-have or demoware.


In previous conversations with Higharc, it became apparent that the company had become successful, almost too successful, as onboarding new clients to the system was a bottleneck. Obviously, every house builder has different styles and capabilities which have to be captured and encoded in Higharc, but there was also the issue of digital skill sets. Typically, firms opting to use Higharc were not traditional BIM firms – they were housebuilders, more likely to use AutoCAD or a hand-drawn sketch than to have much understanding of BIM or modelling concepts. It turns out that the AI sketch tool originated out of a need to include the non-digital, but highly experienced, house-building workforce.


Marc Minor: The sketch we used to illustrate at launch is a real one, from one of our customers. We have a client, a very large builder in Texas who builds 4,000 houses per year just in Texas. They have a team of 45 or so designers and drafters, and they have a process that’s very traditional. They start on drawing boards, just sketching. They spend three months or so in conceptual design and eventually they’ll pass on their sketches to another guy who works on the computer, where he models in SketchUp, so they can do virtual prototype walk-throughs to really understand the building, the design choices, and then make changes to it.

The challenge here is that it takes a long time to go back and forth. We showed them this new AI sketch-to-model work we were doing, and they gave us one of their sketches for one of their homes that they’re working on. The results blew their minds. They said that for them ‘this is huge’. They told us they can cut weeks or months from their conceptual stage and probably bring in more folks at the prototype walk-through stage. It’s a whole new way of interacting with design.

What makes this so special, and is the only reason we were able to do it, is because of what Higharc is in the first place. It’s a data-first BIM system, built for the web from the ground up. Because it’s data first, it means that we can not only generate a whole lot of synthetic data for training rapidly, but we really have a great target for a system like this – taking a sketch and trying to create something meaningful out of the sketch. It’s essentially trying to transform the sketch into our data model. And when you do that, you get all the other features and benefits of the Higharc system right on top of it.


Martyn Day: As the software processes the file, it seems to go through several stages. Is the first form finding?

Marc Minor: It’s not just form finding, actually; it’s mapping the rooms to particular data types. And those types carry with them all kinds of rules and settings.

Michael Bergin: At the conceptual / sketch design phase these are approximate dimensions. Once you’ve converted the rooms into Higharc, the model is extremely flexible. You can stretch all the rooms, you can scale them, and everything will replace itself and update automatically. We also have a grid resolution setting, so the sketch could even be a bubble diagram, or very rough lines, and you just set the grid resolution to be quite high, and you can still get a model out of that.

Higharc contains procedural logic, as to how windows are placed, how the foundation is placed, the relationships between the rooms. So the interaction that you see as the AI processes the sketch and makes the model, places the window, doors and the spaces between the rooms, that is all coming from rules that relate to the specifications for our builder.


Martyn Day: If doors collide, or designs do not comply with local codes, do you get alerted if you transgress some kind of design rule?

Michael Bergin: We have about 1,000 settings in Higharc that relate to the building that are to adjust for and align to issues of code compliance. When you get into automated rule checking, evaluating and digesting code rules and then applying that to the model, we have produced some exciting results in more of a research phase in that direction. There’s certainly lots of opportunities to express design logic and design rules, and we’ll continue to develop in that direction.

Marc Minor: One of the ways we use this is we go to a home builder we want as a customer. In advance of having a sales chat, we’ll actually go to their website and screenshot one of their floor plans. We’ll pull it into the AI tool and set it up as the house. We want to help folks understand that it’s not as painful and as hard as you might think. The whole BIM revolution happened in commercial; that’s kind of what’s happening in home building now. But 90% or more of all home builders use AutoCAD. We rarely come across Revit.


Martyn Day: I can see how you can bring non-digital housebuilders into the model creation side of things, where before everything would be handled by the computer expert. With this AI tool, does that mean suddenly everyone can contribute to the Higharc model?

Michael Bergin: Yes! That’s extremely important to us, bringing more of the business into the realm of the design, that’s really the core of our business. How do we bring the purchasing and the estimating user into the process of design? How do we take the operations user who’s scheduling all of the work to be done on the home into the design, because ultimately, they all have feedback. The sales people have feedback. The field team have feedback, but they’re all blocked out. They are always working through an intermediary, and perhaps through an email to a CAD operator. It goes into a backlog. Cutting that distance between all the stakeholders in the design process and the artefact of the design has driven a lot of our development.

It’s exciting to see them engaging in the process, to see new opportunities opening up for them, which I think is broadly a great positive aspect of what’s happening with the AI revolution.


Martyn Day: You have focused on converting raster images, which is hard, as opposed to vector. But could you work with vector drawings?

Michael Bergin: While it would have been easier to use a vector representation to do the same AI conversion work, the reason that we did focus on raster was that vector would have been quite limiting. It would have blocked us out from using conceptual representations. If our customers are using a digital tool at all, they are building sketches in something like Figjam. In this early conceptual design stage, we have not seen the Rayon tools or really any of the new class of tools that the market is opening up for. Our market in US home builders tends to be the way that they’ve been doing things for some decades, and it works well for them, and we are fortunate that they have determined that Higharc is the right tool for their business.

Making it possible for the business process to change has required us to develop a lot of capabilities, like integrating with the purchasing and estimation suite, integrating with the sales team, integrating with ERPs, really mirroring their business. Otherwise, I don’t think that we would have an excellent case for adoption of new tools in this industry.

AI vectorisation to launch for HP Build Workspace

New addition to HP’s AEC-focused collaboration platform uses AI to convert raster images into CAD-editable drawings

In May 2025, HP plans to officially launch an AI vectorisation feature for its HP Build Workspace collaboration platform, first announced in September 2024.

According to HP, it will be the first solution to use AI for converting raster images into CAD-editable documents, saving hours of manual work per drawing. The system can detect lines, polylines, arcs, and text. Once text has been extracted and indexed, users can search on that data.

The conversion service comes with a simple editor, which allows users to change lines that were incorrectly converted from dashed into solid, connect lines that should have been snapped together, as well as clean, remove or add elements.

HP Build Workspace is also set to integrate more closely with the HP DesignJet family of large-format printers and scanners. According to HP, this enhanced connectivity will enable features such as scanning directly to HP Build Workspace for AI-powered vectorisation, improving communication and collaboration beyond traditional paper-based workflows.

HP is also targeting May 2025 for the launch of a Flatness Measurement Service for HP SitePrint, its autonomous three-wheeled robot that prints 2D plans directly onto the floors of construction sites.

The HP SitePrint Flatness Measurement Service will allow users to measure floor flatness and print elevation corrections directly onto the floor. HP says this eliminates the need for external elevation and flatness data processing, which is traditionally done in the back office before being communicated to field teams.

The service aims to consolidate four manual steps—marking information on the floor, capturing elevation data, processing the data, and relocating elevation details—into a single streamlined workflow.

[Image: HP SitePrint]

AI and the future of arch viz

Tudor Vasiliu, founder of architectural visualisation studio Panoptikon, explores the role of AI in arch viz, streamlining workflows, pushing realism to new heights, and unlocking new creative possibilities without compromising artistic integrity.

AI is transforming industries across the globe, and architectural visualisation (let’s call it ‘Arch Viz’) is no exception. Today, generative AI tools play an increasingly important role in an arch viz workflow, empowering creativity and efficiency while maintaining the precision and quality expected in high-end visuals.

In this piece I will share my experience and best practices for how AI is actively shaping arch viz by enhancing workflow efficiency, empowering creativity, and setting new industry standards.

Streamlining workflows with AI

AI, we dare say, has proven not to be a bubble or a simple trend, but a proper productivity driver and booster of creativity. Our team at Panoptikon and others in the industry leverage generative AI tools to the maximum to streamline processes and deliver higher-quality results.

Tools like Stable Diffusion, Midjourney and Krea.ai transform initial design ideas or sketches into refined visual concepts. Platforms like Runway, Sora, Kling, Hailuo or Luma can do the same for video.

With these platforms, designers can enter descriptive prompts or reference images, generating early-stage images or videos that help define a project’s look and feel without lengthy production times.

This capability is especially valuable for client pitches and brainstorming sessions, where generating multiple iterations is critical. Animating a still image is possible with the tools above just by entering a descriptive prompt, or by manipulating the camera in Runway.ml.
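
Of the tools named above, Stable Diffusion is the one that is open source and easy to script directly. A minimal sketch with the Hugging Face diffusers library might look like this; the model ID, prompt and settings are illustrative examples, not a description of any studio’s pipeline.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")                                  # assumes an Nvidia GPU is available

prompt = ("early concept image of a timber-clad lakeside pavilion at dusk, "
          "warm interior lighting, overcast sky, photorealistic, wide angle")

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("concept_01.png")
```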

Sometimes, clients find themselves under pressure due to tight deadlines or external factors, while studios may also be fully booked or working within constrained timelines. To address these challenges, AI offers a solution for generating quick concept images and mood boards, which can speed up the initial stages of the visualisation process.

In these situations, AI tools provide a valuable shortcut by creating reference images that capture the mood, style, and thematic direction for the project. These AI-generated visuals serve as preliminary guides for client discussions, establishing a strong visual foundation without requiring extensive manual design work upfront.

Although these initial images aren’t typically production-ready, they enable both the client and visualisation team to align quickly on the project’s direction.

Once the visual direction is confirmed, the team shifts to standard production techniques to create the final, high-resolution images that accurately reflect the full technical specification of the design. While AI expedites the initial phase, the final output meets the high quality standards expected for client presentations.

Dynamic visualisation

For projects that require multiple lighting or seasonal scenarios, Stable Diffusion, LookX or Project Dream allow arch viz artists to produce adaptable visuals by quickly applying lighting changes (morning, afternoon, evening) or weather effects (sunny, cloudy, rainy).

Additionally, AI’s ability to simulate seasonal shifts allows us to show a park, for example, lush and green in summer, warm-toned in autumn, and snow-covered in winter. These adjustments make client presentations more immersive and relatable.
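
A rough sketch of how such variants might be batch-generated with open source tooling is shown below, again using the diffusers library; the base render, prompts and strength value are assumptions for illustration only, not a documented workflow from any of the tools named above.

```python
# Hypothetical example: lighting and seasonal variants of one base render
# via Stable Diffusion image-to-image. File names, prompts and the strength
# value are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base_render = Image.open("park_base_render.png").convert("RGB").resize((768, 512))

variants = {
    "summer_morning": "lush green park, soft morning sunlight, clear sky",
    "autumn_afternoon": "autumn foliage, warm golden afternoon light",
    "winter_evening": "snow-covered park, overcast winter evening, cool tones",
}

for name, scene in variants.items():
    prompt = f"architectural visualisation of a park, {scene}, photorealistic"
    # a lower strength keeps the composition while letting the AI restyle
    # lighting, vegetation and atmosphere
    image = pipe(prompt=prompt, image=base_render, strength=0.45).images[0]
    image.save(f"park_{name}.png")
```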

Adding realism through texture and detail

AI tools can also enhance the realism of 3D renders. In Stable Diffusion, Magnific, and Krea, specifying material qualities through prompts or reference images quickly improves materials like wood, concrete, and stone, as well as greenery and people.

The tools add nuanced details like weathering to any surface or generate intricate enhancements that may be challenging to achieve through traditional rendering alone. The visuals become more engaging and give clients a richer sense of the project’s authenticity and realistic quality.

This step does not replace traditional rendering or post-production, but complements them, bringing the image closer to the level of photorealism clients expect.
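
One way to approximate this kind of detail pass with open tooling is a prompt-guided upscale of a crop where the materials matter most. The sketch below uses the Stable Diffusion x4 upscaler from diffusers; every file name, model ID and value is an assumption, and this is not how the commercial tools mentioned above are implemented.

```python
# Prompt-guided 4x upscale of a render crop with Stable Diffusion's upscaler.
# Illustrative only: model ID, crop, prompt and sizes are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# A small crop of the render where material detail matters (facade, paving, timber).
# Larger crops need substantially more GPU memory.
crop = Image.open("facade_crop.png").convert("RGB").resize((256, 256))

prompt = "weathered oak cladding, subtle grain and knots, photorealistic close-up"

detailed = pipe(prompt=prompt, image=crop).images[0]  # returns a 1024 x 1024 image
detailed.save("facade_crop_detailed.png")
```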

Bridging efficiency and artistic quality

While AI provides speed and efficiency, human expertise remains essential for technical precision. AI handles repetitive tasks, but designers need to review and refine each output so that the visuals meet the exact technical specifications set out in each project’s design brief.

Challenges and considerations

It is essential to approach the use of AI with awareness of its limitations and ethical considerations.

Maintaining quality and consistency: AI-generated images sometimes contain inconsistencies or unrealistic elements, especially in complex scenes. These outputs require human refinement to align with the project’s vision so that the result is accurate and credible.

Ethical concerns around originality: There’s an ongoing debate about originality in AI-generated designs, as many AI outputs are based on training data from existing works. We prioritise using AI as a support tool rather than a substitute for human creativity, as integrity is among our core values.

Future outlook: innovation with a human touch

Looking to 2025 and beyond, AI’s role in arch viz is likely to expand further – supporting, rather than replacing, human creativity. AI will increasingly handle technical hurdles, allowing designers to focus on higher-level creative tasks.

AI advancements in real-time rendering are another hot topic, expected to enable more immersive, interactive tours, while predictive AI models may suggest design elements based on client preferences and environmental data, helping studios anticipate client needs.

AI’s role in arch viz goes beyond productivity gains. It’s a catalyst for expanding creative possibilities, enabling responsive design, and enhancing client experiences. With careful integration and human oversight, AI empowers arch viz studios – us included – to push the boundaries of what’s possible while, at the same time, preserving the artistry and precision that define high-quality visualisation work.


About the author

Tudor Vasiliu is an architect turned architectural visualiser and the founder of Panoptikon, an award-winning high-end architectural visualisation studio serving clients globally. With over 18 years of experience, Tudor and his team help the world’s top architects, designers, and property developers realise their vision through high-quality 3D renders, films, animations, and virtual experiences. Tudor won Best Architectural Image at the CGarchitect 3D Awards 2019, and has led panels and spoken at international industry events including the D2 Vienna Conference, State of Art Academy Days in Venice, Italy, and Inbetweenness in Aveiro, Portugal, among others.


Main image caption: Rendering by Panoptikon for ‘The Point’, Salt Lake City, Utah. Client: Arcadis (Credit: Courtesy of Panoptikon, 2025)

Snaptrude builds in Excel-like interface
https://aecmag.com/bim/snaptrude-builds-in-excel-like-interface/
Tue, 01 Apr 2025 12:32:20 +0000
New ‘Program mode’ allows architects to quickly generate data-backed design concepts

New ‘Program mode’ allows architects to quickly generate data-backed design concepts with views, renders, and drawings

Snaptrude has built an Excel-like interface directly into its BIM authoring software, to make architectural programming simpler and allow architects to quickly generate data-backed design concepts with views, renders, and drawings.

With the new ‘Program’ mode every row, formula, and update is synced live with the 3D model, and vice versa. According to Snaptrude, this means architects don’t need to juggle separate spreadsheets, ensuring real-time accuracy and eliminating the need for manual cross-checking. Users can define custom formulas and rules to fit their specific building program needs.
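
As a purely conceptual illustration of what a live link between programme rows and model spaces involves, here is a short sketch; none of the class or function names below come from Snaptrude’s actual product or API.

```python
# Conceptual sketch only: a two-way check between programme rows (with a
# user-defined formula) and areas measured from a 3D model. These names are
# hypothetical and do not reflect Snaptrude's implementation.
from dataclasses import dataclass, field

@dataclass
class ProgramRow:
    name: str
    seats: int
    area_per_seat: float  # m2 per seat: the row's custom rule

    @property
    def required_area(self) -> float:
        # the "formula" for this row, recalculated whenever its inputs change
        return self.seats * self.area_per_seat

@dataclass
class ModelSpace:
    name: str
    modelled_area: float  # m2 measured live from the 3D model

@dataclass
class ProgramModelLink:
    rows: dict[str, ProgramRow] = field(default_factory=dict)
    spaces: dict[str, ModelSpace] = field(default_factory=dict)

    def variance(self, name: str) -> float:
        """How far the modelled space is over or under its programme target."""
        return self.spaces[name].modelled_area - self.rows[name].required_area

link = ProgramModelLink(
    rows={"Open office": ProgramRow("Open office", seats=120, area_per_seat=8.0)},
    spaces={"Open office": ModelSpace("Open office", modelled_area=910.0)},
)
print(link.variance("Open office"))  # -50.0 m2: the modelled space is under target
```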

‘Program’ mode works alongside Tables, which is billed as a new home for all kinds of structured information inside Snaptrude.

Tables includes an AI wizard, so users can ‘quickly generate’ or refine their program with an AI co-pilot.

“Over the last 18 months, we’ve started spending a lot of time with mid to large sized architectural firms across the US and globally as well. And one thing which we constantly kept hearing is Excel is everywhere, and it’s a huge part of everyone’s workflows, and it’s quite understandable,” said Altaf Ganihar, founder and CEO, Snaptrude.

“From programming to construction, everybody knows how to use it, it’s very easy to use, and everybody relies on it. So instead of fighting it, we said, let’s just embrace it, we built an Excel like interface directly into Snaptrude.”

Snaptrude Program mode is currently in early access.

Allplan acquires Manufacton to boost offsite
https://aecmag.com/construction/allplan-acquires-manufacton-to-boost-offsite/
Thu, 06 Mar 2025 10:37:15 +0000
US firm’s AI and data-driven solutions designed to enhance offsite construction and prefabrication processes


AEC software specialist Allplan, part of the Nemetschek Group, has acquired Manufacton, the US developer of an offsite construction platform that provides real-time visibility into offsite production and optimises prefabrication processes through AI and data-driven decision-making.

According to Allplan, the acquisition will enable it to capitalise on the potential growth in the modular construction and Design for Manufacture and Assembly (DfMA) sectors, strengthen its position in the US market, and provide Manufacton with a platform to expand its presence in Europe and Asia Pacific.

“We are delighted to welcome the Manufacton team to the Allplan family,” said Eduardo Lazzarotto, chief product and strategy officer at Allplan.

“Manufacton is a great fit and a perfect complement to our existing portfolio of construction solutions. This acquisition enhances our expertise in covering the entire product lifecycle and gives us a strong competitive advantage in the rapidly growing modular construction and DfMA markets.”

Manufacton provides integrated project management software for offsite construction and prefabrication. Its solution, used by general and specialty trade contractors, as well as modular builders, combines manufacturing production and construction project management software.

According to Allplan, this enables contractors to ‘seamlessly manage and track’ offsite construction and modular fabrication throughout the construction process.
