Visualisation Archives - AEC Magazine
https://aecmag.com/visualisation/

Twinmotion now supports Nvidia DLSS 4
https://aecmag.com/visualisation/twinmotion-now-supports-nvidia-dlss-4/
Wed, 16 Apr 2025 15:01:45 +0000

Neural rendering technology can deliver close to a 4x boost in frame rates

Twinmotion 2025.1.1, the latest release of the real-time rendering software from Epic Games, supports Nvidia DLSS 4, a suite of neural rendering technologies that uses AI to boost 3D performance.

According to Epic Games, with DLSS 4 enabled Twinmotion can render almost four times as many frames per second (FPS) as with DLSS switched off.

DLSS 4 uses a technology called Multi Frame Generation, an evolution of Single Frame Generation, which was introduced in DLSS 3.

Single Frame Generation uses the AI Tensor cores on Nvidia GPUs to interpolate one synthetic frame between every two traditionally rendered frames, improving performance by reducing the number of frames that need to be rendered by the GPU.

Multi Frame Generation extends this approach by using AI to generate up to three frames between each pair of rendered frames, further increasing frame rates. The technology is only available on Nvidia’s new Blackwell-based RTX GPUs, which have been architected specifically to better support neural rendering.
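As a rough back-of-the-envelope sketch (not Nvidia's actual pipeline), the frame-rate effect of frame generation can be modelled as the rendered frame rate multiplied by one plus the number of AI-generated frames per rendered frame:

```python
def effective_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Illustrative arithmetic: each traditionally rendered frame is
    followed by `generated_per_rendered` AI-interpolated frames."""
    return rendered_fps * (1 + generated_per_rendered)

# DLSS 3 Single Frame Generation: one synthetic frame per rendered frame
print(effective_fps(30, 1))  # 2x: 60
# DLSS 4 Multi Frame Generation: up to three synthetic frames per rendered frame
print(effective_fps(30, 3))  # 4x: 120
```

This simple model matches the "close to 4x" figure quoted for Twinmotion, ignoring the (small) overhead of the interpolation itself.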

Multi Frame Generation can be used alongside Super Resolution, where AI upscales a lower-resolution frame to a higher resolution, and Ray Reconstruction, where AI is used to generate additional pixel data in ray-traced scenes. According to Nvidia, when all DLSS technologies are combined, 15 out of every 16 pixels in a frame can be generated by AI. This greatly reduces the computational demands of traditional rendering and significantly boosts overall performance.
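Nvidia's "15 out of every 16 pixels" figure follows from simple arithmetic, assuming Super Resolution traditionally renders a quarter of the pixels (2x upscaling per axis) and Multi Frame Generation traditionally renders one frame in four; a quick check:

```python
from fractions import Fraction

# Assumptions: Super Resolution renders 1/4 of each frame's pixels
# (2x upscaling per axis), and Multi Frame Generation renders 1 frame in 4.
sr_rendered = Fraction(1, 4)   # pixels traditionally rendered per frame
fg_rendered = Fraction(1, 4)   # frames traditionally rendered

rendered_fraction = sr_rendered * fg_rendered
ai_fraction = 1 - rendered_fraction
print(ai_fraction)  # → 15/16
```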

Twinmotion 2025.1.1 includes several other features.

A new 3D Grass material allows users to drag and drop five types of grass onto any surface. The Configurations feature, first introduced in Twinmotion 2025.1 to let users build interactive 3D presentations that showcase different variations of a project, has also been enhanced. Users can now export configurators to Twinmotion Cloud for easy sharing, and use a mesh as a trigger — for example, clicking on a door handle to open a door.

Lumion View ‘design companion’ viz tool debuts
https://aecmag.com/visualisation/lumion-view-design-companion-viz-tool-debuts/
Fri, 04 Apr 2025 13:41:34 +0000

Easy-to-use viz plug-in now available in Early Access for SketchUp, with Revit, Rhino and Archicad to follow

Lumion has unveiled Lumion View, a new visualisation plug-in that allows architects to visualise their projects in a path-traced, real-time viewport without having to leave their primary modelling environment.

The software is billed as an early-stage design companion, purpose-built for design exploration, delivering live rendered feedback on design choices. Any geometry or material changes made in the host CAD/BIM tool are automatically reflected in the Lumion View window.
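Lumion has not published how the plug-in synchronises with its host application; purely as an illustration of the idea, a minimal observer-pattern sketch (all class and method names hypothetical) might look like this:

```python
from typing import Callable

class DesignModel:
    """Hypothetical stand-in for the CAD/BIM model being edited."""
    def __init__(self):
        self._listeners: list[Callable[[str, str], None]] = []
        self._materials: dict[str, str] = {}

    def subscribe(self, listener: Callable[[str, str], None]) -> None:
        self._listeners.append(listener)

    def set_material(self, surface: str, material: str) -> None:
        self._materials[surface] = material
        for notify in self._listeners:  # push the change to every viewport
            notify(surface, material)

class LiveViewport:
    """Hypothetical render window that re-shades only what changed."""
    def __init__(self, model: DesignModel):
        self.shaded: dict[str, str] = {}
        model.subscribe(self.on_change)

    def on_change(self, surface: str, material: str) -> None:
        self.shaded[surface] = material  # stands in for an incremental re-render

model = DesignModel()
view = LiveViewport(model)
model.set_material("facade", "weathered concrete")
print(view.shaded["facade"])  # → weathered concrete
```

The point of the pattern is that the viewport never polls: every edit in the model is pushed to subscribers the moment it happens, which is what makes the "viewport on steroids" experience feel live.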

“We think of Lumion View as a new viewport to a CAD tool,” says Artur Brzegowy, product manager, Lumion. “It’s like a viewport on steroids. It can help you to make better design decisions, instead of creating a final beautiful render.”

Features include real-time ray tracing, conceptual render styles (clay, wood, Styrofoam, glossy), sun studies, and material adjustments directly within the CAD environment. Users can also ‘quickly produce’ up to 4K renders for sharing visuals with clients. VR walkthroughs and a Mac version are on the roadmap.

Lumion View is currently available in Early Access for SketchUp, but there are plans to expand to Revit later this year, followed by Archicad, Rhino, and other platforms.

Pricing has not yet been announced, but the company has said that Lumion View will be ‘very affordable’ and will also run on ‘much lower grade hardware’ than other viz tools.

Lumion View is positioned as a complementary solution to Lumion Pro, which will continue to be Lumion’s ‘high-quality, high-end visualisation platform for the architecture community.’

Lumion Pro subscribers get free access to Lumion View, and for every Lumion Pro seat, up to 10 team members can be invited to use Lumion View at no extra cost until October 31, 2025.


AI and the future of arch viz
https://aecmag.com/visualisation/ai-and-the-future-of-arch-viz/
Fri, 21 Feb 2025 09:00:39 +0000

Tudor Vasiliu, founder of architectural visualisation studio Panoptikon, explores the role of AI in arch viz, streamlining workflows, pushing realism to new heights, and unlocking new creative possibilities without compromising artistic integrity.

AI is transforming industries across the globe, and architectural visualisation (let’s call it ‘Arch Viz’) is no exception. Today, generative AI tools play an increasingly important role in an arch viz workflow, empowering creativity and efficiency while maintaining the precision and quality expected in high-end visuals.

In this piece I will share my experience and best practices for how AI is actively shaping arch viz by enhancing workflow efficiency, empowering creativity, and setting new industry standards.

Streamlining workflows with AI

AI, we dare say, has proven not to be a bubble or a simple trend, but a proper productivity driver and booster of creativity. Our team at Panoptikon and others in the industry leverage generative AI tools to the maximum to streamline processes and deliver higher-quality results.

Tools like Stable Diffusion, Midjourney and Krea.ai transform initial design ideas or sketches into refined visual concepts. Platforms like Runway, Sora, Kling, Hailuo or Luma can do the same for video.

With these platforms, designers can enter descriptive prompts or reference images, generating early-stage images or videos that help define a project’s look and feel without lengthy production times.

This capability is especially valuable for client pitches and brainstorming sessions, where generating multiple iterations is critical. Animating a still image is possible with the tools above just by entering a descriptive prompt, or by manipulating the camera in Runway.ml.

Sometimes, clients find themselves under pressure due to tight deadlines or external factors, while studios may also be fully booked or working within constrained timelines. To address these challenges, AI offers a solution for generating quick concept images and mood boards, which can speed up the initial stages of the visualisation process.

In these situations, AI tools provide a valuable shortcut by creating reference images that capture the mood, style, and thematic direction for the project. These AI-generated visuals serve as preliminary guides for client discussions, establishing a strong visual foundation without requiring extensive manual design work upfront.

Although these initial images aren’t typically production-ready, they enable both the client and visualisation team to align quickly on the project’s direction.

Once the visual direction is confirmed, the team shifts to standard production techniques to create the final, high-resolution images that accurately showcase the full range of technical specifications that define the design. While AI expedites the initial phase, the final output meets the high-quality standards expected for client presentations.

Dynamic visualisation

For projects that require multiple lighting or seasonal scenarios, Stable Diffusion, LookX or Project Dream allow arch viz artists to produce adaptable visuals by quickly applying lighting changes (morning, afternoon, evening) or weather effects (sunny, cloudy, rainy).
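The prompts themselves are tool-specific, but enumerating lighting and weather variants of a base prompt is simple to automate; a generic, tool-agnostic sketch (the example prompt text is illustrative, not from any particular project):

```python
from itertools import product

def scenario_prompts(base: str, lightings: list[str], weathers: list[str]) -> list[str]:
    """Build one text prompt per lighting/weather combination."""
    return [f"{base}, {light} light, {weather}"
            for light, weather in product(lightings, weathers)]

prompts = scenario_prompts(
    "residential courtyard, photorealistic architectural render",
    ["morning", "afternoon", "evening"],
    ["sunny", "cloudy", "rainy"],
)
print(len(prompts))  # → 9
print(prompts[0])
```

Feeding each variant to the same image-generation tool with a fixed seed or reference image keeps the scene consistent while only the lighting and weather change.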

Additionally, AI’s ability to simulate seasonal shifts allows us to show a park, for example, lush and green in summer, warm-toned in autumn, and snow-covered in winter. These adjustments make client presentations more immersive and relatable.

Adding realism through texture and detail

AI tools can also enhance the realism of 3D renders. By specifying material qualities through prompts or reference images in Stable Diffusion, Magnific, and Krea, materials like wood, concrete and stone, as well as greenery and people, can be quickly improved.

The tools add nuanced details like weathering to any surface or generate intricate enhancements that may be challenging to achieve through traditional rendering alone. The visuals become more engaging and give clients a richer sense of the project’s authenticity and realistic quality.

This step may not replace traditional rendering or post-production but serves as a complementary process to the overall aesthetic, bringing the image closer to the level of photorealism clients expect.

Bridging efficiency and artistic quality

While AI provides speed and efficiency, human expertise remains essential for technical precision. AI handles repetitive tasks, but designers need to review and refine each output so that the visuals meet the exact technical specifications provided by each project’s design brief.

Challenges and considerations

It is essential to approach the use of AI with awareness of its limitations and ethical considerations.

Maintaining quality and consistency: AI-generated images sometimes contain inconsistencies or unrealistic elements, especially in complex scenes. These outputs require human refinement to align with the project’s vision so that the result is accurate and credible.

Ethical concerns around originality: There’s an ongoing debate about originality in AI-generated designs, as many AI outputs are based on training data from existing works. We prioritise using AI as a support tool rather than a substitute for human creativity, as integrity is among our core values.

Future outlook: innovation with a human touch: Looking to 2025 and beyond, AI’s role in arch viz is likely to expand further – supporting, rather than replacing, human creativity. AI will increasingly handle technical hurdles, allowing designers to focus on higher-level creative tasks.

AI advancements in real-time rendering are another hot topic, expected to enable more immersive, interactive tours, while predictive AI models may suggest design elements based on client preferences and environmental data, helping studios anticipate client needs.

AI’s role in arch viz goes beyond productivity gains. It’s a catalyst for expanding creative possibilities, enabling responsive design, and enhancing client experiences. With careful integration and human oversight, AI empowers arch viz studios – us included – to push the boundaries of what’s possible while, at the same time, preserving the artistry and precision that define high-quality visualisation work.


About the author

Tudor Vasiliu is an architect turned architectural visualiser and the founder of Panoptikon, an award-winning high-end architectural visualisation studio serving clients globally. With over 18 years of experience, Tudor and his team help the world’s top architects, designers, and property developers realise their vision through high-quality 3D renders, films, animations, and virtual experiences. Tudor has been honoured with the CGarchitect 3D Awards 2019 – Best Architectural Image, and has led industry panels and speaking engagements at industry events internationally including the D2 Vienna Conference, State of Art Academy Days, Venice, Italy and Inbetweenness, Aveiro, Portugal – among others.


Main image caption: Rendering by Panoptikon for ‘The Point’, Salt Lake City, Utah. Client: Arcadis (Credit: Courtesy of Panoptikon, 2025)

Reviving Brutalism: preserving the legacy of concrete giants
https://aecmag.com/visualisation/reviving-brutalism-preserving-the-legacy-of-concrete-giants/
Wed, 16 Apr 2025 05:00:01 +0000

Roderick Bates of Chaos highlights how 3D visualisation can help change the conversation around Brutalism – offering practical pathways for adaptive reuse and public engagement

Brutalism, one of the most polarising architectural styles, with its bold concrete forms and oversized design, returned to the spotlight with The Brutalist – a newly released film exploring the intertwined fate of a Brutalist architect and his buildings.

The film’s revival of interest in Brutalism highlights the circular nature of unique architectural trends. While some admire Brutalism for its raw, imposing honesty, others see it as an eyesore that clashes with the modern architectural landscape – an ongoing debate since Brutalism first emerged.

Given ever-changing architectural trends, we should not be so hasty to demolish these buildings based on contemporary aesthetic judgement, as they may come back into favour in a decade. Cultural moments like this film can shift public perception of the artform, allowing the public to once again understand and see the beauty in Brutalism – before it’s lost to the wrecking ball.


Find this article plus many more in the March / April 2025 Edition of AEC Magazine

Why preservation over demolition?

Cultural legacy and historical impact: Preserving buildings with a rich history brings incredible cultural value, reflecting the culture and lived experiences of their time of construction. Brutalism emerged in the UK during the 1950s as part of post-war reconstruction. With Britain left in ruins and limited funds for rebuilding, architecture prioritised functionality and cost-effectiveness, shaping the stark aesthetic of the movement.

Inexpensive modular elements, concrete and reinforced steel were used for institutional and residential buildings that needed to be rebuilt quickly to return the UK to a liveable state. As historical symbols of the country’s resilience in a post-war era, these buildings should not be so readily dismissed over debates on aesthetics, as they are cultural icons embodying Britain’s resilience and commitment to social progress, accessibility and equality. Rather than simply demolishing these physical manifestations of the British spirit, efforts should be directed toward preservation through thoughtful repurposing, ensuring their integration within the modern architectural landscape.


(Images courtesy of Chaos)


Environmental impact: The preservation and adaptive reuse of Brutalist buildings, however, presents considerable challenges. In many instances, building codes and regulations inhibit retrofitting efforts to such a degree that demolition is the only solution. Where renovation is possible, listed Brutalist structures pose a distinct set of challenges, with the buildings presenting a level of energy performance well below modern energy efficiency standards and the required modifications to make them both efficient and usable running afoul of conservation guidelines.

Looking beyond challenging operational efficiency, the preservation of Brutalist buildings does have a compelling carbon argument. The calcination of limestone to produce the cement in concrete is a massive source of carbon emissions, which is why architects and designers often prefer more environmentally friendly materials. Brutalist buildings, due to their impressive mass and extensive use of concrete, are vastly carbon-intensive. However, since that carbon was already emitted during construction in the 1950s, preserving these buildings rather than demolishing them prevents additional emissions from new construction.

Preserving Brutalist buildings conserves resources by extending the lifespan of structures where the bulk of carbon emissions have already occurred. This makes adaptive reuse not only the right choice historically and culturally, but also the more environmentally responsible option.

Contemporary meets traditional

Contemporary architects are already leading the repurposing charge by reimagining Brutalist principles and blending them with modern, sustainable materials while retaining core stylistic elements. Raw concrete used in existing Brutalist structures is being combined with materials like wood and glass to soften its boldness, creating a more artistic interplay of textures and materials. This softening of Brutalism’s rough edges has enabled it to integrate more seamlessly into the surrounding landscapes.

Moreover, unlike many other historical buildings, Brutalist structures are highly adaptable for modern use. Their mass and robust design not only provide acoustic isolation, a desirable trait in the context of residential reuse, but also make slab penetrations, for running pipes, ducts and other systems through walls and floors, much easier. When repurposing contemporary buildings with a lightweight structure, every penetration must be carefully considered, which fortunately isn’t the case with overbuilt Brutalist structures.

Changing perceptions

Repurposing any building isn’t cheap, and before investing in an adaptive reuse project, it’s essential that the public, including the potential future residents, understand both the vision for the final result and the motive behind repurposing over demolition. Otherwise, in 10 years, we could find ourselves facing the same debate over aesthetics and potential demolition.


(Image courtesy of Chaos)


3D visualisation technology enables designers to produce accurate digital representations of existing structures, while incorporating proposed design modifications, new features, and materials, creating an accurate reflection of what the project will actually look like, once completed. This greatly facilitates the presentation of the design to the public, allowing for stakeholder feedback to be gathered and integrated early in the process – avoiding costly delays, and even more importantly, potential commercial failure.

Secondly, to authentically experience the raw scale and emotional impact of a Brutalist building, one must visit it in person, though this is not always possible. Interactive renders offer a solution, allowing both designers and the public to virtually experience being towered over by the building’s mass. On an entirely different scale, the intricate patterns of board-formed concrete are a subtle yet significant feature of Brutalist buildings that can only be appreciated either through direct experience or with high-quality renders that capture the dynamic nuances of lighting and materials, accurately conveying the beauty and emotion of Brutalism to stakeholders.

The visual impression of Brutalist buildings is incredibly strong. This is key to their appeal, but it can be difficult to visualise the buildings taking on a new life, much less as a welcoming apartment building or office. A high-quality visualisation can allow people to see a new reality, letting them experience, virtually, the beauty and emotion of Brutalism, hopefully shifting public perception in the process.

The future of Brutalism

The future of Brutalist buildings is unclear, but it is evident that demolition, without considering alternatives, would be a waste. A waste of resources, of cultural history, and of beautiful buildings that contribute an emotional element to the urban landscape they inhabit. Reimagining and embracing Brutalism is not only about preserving the past but also about recognising its relevance in the present and the cultural values these structures embody. In our current culture where architecture strives for sustainable design solutions, we must look at what we already have and repurpose it to meet modern needs, establishing an important thread tying the old and the new.

The distaste for Brutalism suggests that the beauty of these designs was never clearly communicated. By making these repurposed designs accessible through emotive, immersive visualisations, the door to public appreciation is opened – before large budgets are spent on redevelopment. At Chaos we strive to democratise the design process, making it accessible to all stakeholders by simplifying complex styles and revealing their inner, timeless beauty.


About the author

Roderick Bates is head of corporate development at Chaos, a specialist in design and visualisation technology.

D5 Render 2.10 introduces real-time path tracing
https://aecmag.com/visualisation/d5-render-2-10-introduces-real-time-path-tracing/
Thu, 06 Mar 2025 09:14:14 +0000

AEC rendering software also enhances environments and adds several new AI features

D5 Render 2.10, the latest release of the AEC-focused real-time rendering software, introduces several new features including real-time path tracing, AI-driven post-processing, a city generator, and night sky simulation.

The new real-time path tracing system delivers global illumination (GI) with ‘superior efficiency’, allowing for ‘cinematic-quality’ rendering in real time. According to the company, ‘instant lighting results’ reduce trial and error while minimising the need for extensive post-processing.

The real-time path tracing system enhances visual fidelity with physically accurate reflections, soft shadows, and indirect lighting with customisable GI precision, reflection depth, samples per pixel (SPP), and noise reduction. An accumulate mode progressively refines render output.
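D5’s implementation is proprietary, but an ‘accumulate mode’ conventionally means progressive refinement: averaging successive noisy path-traced samples so the per-pixel estimate converges over time. A toy sketch of the idea:

```python
def accumulate(samples: list[float]) -> list[float]:
    """Running average of per-pixel radiance samples, as a progressive
    path tracer refines its estimate frame by frame."""
    estimate = 0.0
    history = []
    for n, s in enumerate(samples, start=1):
        estimate += (s - estimate) / n  # incremental mean update
        history.append(estimate)
    return history

# Noisy radiance samples for one pixel whose true value is 0.5
noisy = [0.9, 0.1, 0.7, 0.3, 0.5]
print(accumulate(noisy)[-1])  # converges toward the true mean, 0.5
```

Each additional sample per pixel (the SPP setting) reduces the variance of this estimate, which is why leaving the camera still lets the image visibly “clean up”.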


D5 Render 2.10 includes Milky Way Simulation to add atmospheric depth for realistic night scenes

D5 Render’s new Geo Sky Day-Night Cycle is designed to simplify the process of rendering realistic night scenes, enabling ‘seamless transitions’ between daytime and night time lighting.

The software includes customisable Moon & Star Intensity for precise celestial brightness and positioning; Milky Way Simulation to add atmospheric depth for highly realistic night scenes; and custom night settings which allow users to fine-tune moon intensity, altitude, and phases for enhanced realism.


D5 Render 2.10 includes enhanced rain and snow effects

To further enhance realism, the update also introduces improved rain and snow effects. There are more detailed raindrop and snowflake particles, improved puddle and ripple effects for realistic ground interactions, and new water mist simulation that introduces a humid atmosphere for rainy scenes.


City Generator automates the quick and accurate creation of real-world city layouts

Elsewhere, the new City Generator automates the quick and accurate creation of real-world city layouts by integrating OpenStreetMap (OSM) data. There are customisable building heights, materials, and transparency, support for Shapefile (.shp) Import for GIS-based urban planning, and City Model Management Tools to allow for easy modification of roads, buildings, and urban layouts.
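D5’s OSM importer is a black box, but OpenStreetMap data itself is plain XML; as an illustration of the kind of parsing involved, this sketch extracts building heights from a tiny inline extract using only the standard library (the sample data and default height are invented for the example):

```python
import xml.etree.ElementTree as ET

# Tiny inline OSM extract; real city generation would read a downloaded file.
OSM_XML = """<osm>
  <way id="1">
    <tag k="building" v="yes"/>
    <tag k="height" v="25"/>
  </way>
  <way id="2">
    <tag k="highway" v="residential"/>
  </way>
</osm>"""

def building_heights(xml_text: str, default: float = 10.0) -> dict[str, float]:
    """Return {way_id: height_in_metres} for every way tagged as a building;
    ways without a height tag fall back to an assumed default."""
    heights = {}
    for way in ET.fromstring(xml_text).iter("way"):
        tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
        if "building" in tags:
            heights[way.get("id")] = float(tags.get("height", default))
    return heights

print(building_heights(OSM_XML))  # → {'1': 25.0}
```

Since many OSM buildings lack a `height` tag, a generator has to guess or let users override values, which is presumably why D5 exposes customisable building heights.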


AI Inpainting fills missing elements such as sky, vegetation, or water automatically

D5 Render 2.10 also expands its AI-driven functionality with a new tool designed to simplify post-processing, minimising the need for third-party editing software. AI Inpainting fills missing elements such as sky, vegetation, or water automatically; Motion Blur adds natural motion effects, including realistic vehicle taillights, while AI Enhancer improves text and logo sharpness. Furthermore, AI Style Transfer is designed to introduce refined artistic and realistic effects, while ‘AI Make Seamless’ optimises material tiling for smoother textures.


Optimised terrain for faster, more natural landscapes

Other new features include an optimised terrain and scatter workflow for faster, more natural landscapes; animation enhancements for smoother, more intuitive motion; D5 for Teams, which offers enhanced collaboration and cloud integration (OneDrive & SharePoint) for easy project file sharing; and an expanded asset library with 240+ hotel and resort models, including business characters, vacationers, and lobby decor.


Expanded asset library, including 240+ hotel & resort models

Finally, as reported earlier this year, D5 Render 2.10 also includes support for Nvidia DLSS 4, which uses AI-powered frame generation on GeForce RTX 50 Series GPUs to boost FPS by up to 4x.


D5 Render DLSS 4

SketchUp gets viz and interoperability boost
https://aecmag.com/cad/sketchup-2025-boosts-viz-and-interoperability/
Wed, 26 Feb 2025 18:35:28 +0000

New features for SketchUp 2025 include improved materials and environment lighting, plus better Revit and IFC workflows

Trimble SketchUp 2025 features better interoperability with Revit and IFC, and new visualisation capabilities, including photorealistic materials and environment lighting options.

To improve interoperability, the 3D modelling software now includes more predictable IFC roundtrips, greater control over which Revit elements and 3D views are imported, and improved support for photorealistic materials when exporting USD and glTF file formats.

“The IFC import feature is incredible,” said Lucas Grolla, architect and owner of Grolla Arquitetura. “It has greatly improved the coordination of different project models with the architectural design. Plus, the new material editor and HDRI styles open up countless possibilities for the visual representation of projects.”


SketchUp 2025 now includes more predictable IFC roundtrips

According to Trimble, the new visualisation features enable designers to apply photorealistic materials, turn on environment lighting and see how they interact in real time without hitting a ‘render’ button or waiting to see changes.

For enhanced environments, 360-degree HDRI or EXR image files now act as a light source, reflecting off photoreal materials. Meanwhile, dynamic materials are said to more accurately convey texture and represent how real-world materials absorb and reflect light, with a view to producing richer, more realistic visuals within SketchUp. Finally, the introduction of ambient occlusion adds visual emphasis to corners and edges, adding perceived depth and realism with or without having materials applied.
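SketchUp’s renderer internals aren’t public, but lighting a scene from a 360-degree HDRI conventionally relies on the standard equirectangular mapping from a direction vector to image coordinates; a sketch of that mapping:

```python
import math

def direction_to_equirect(dx: float, dy: float, dz: float,
                          width: int, height: int) -> tuple[float, float]:
    """Map a unit direction vector (y up) to (u, v) pixel coordinates in an
    equirectangular (latitude/longitude) environment image."""
    theta = math.atan2(dx, -dz)               # azimuth in [-pi, pi]
    phi = math.asin(max(-1.0, min(1.0, dy)))  # elevation in [-pi/2, pi/2]
    u = (theta / math.pi + 1.0) / 2.0 * (width - 1)
    v = (1.0 - (phi / (math.pi / 2) + 1.0) / 2.0) * (height - 1)
    return u, v

# Looking straight up should sample the top row of the image
u, v = direction_to_equirect(0.0, 1.0, 0.0, 2048, 1024)
print(round(v))  # → 0
```

Sampling the image this way for reflection and lighting directions is what lets an HDRI ‘act as a light source, reflecting off photoreal materials’.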


“Accessing high-quality, realistic materials directly within the platform has made it so much easier to quickly present designs that resonate with clients,” said Kate Hatherell, director of The Interior Designers Hub. “This feature is a game changer for accelerating workflows, and I’m excited to see how it continues to evolve.”

Elsewhere, LayOut, a tool for creating documents from SketchUp models, has been updated to provide a user experience more consistent with SketchUp. 3D Warehouse, a vast repository of 3D models, also now offers curated photoreal materials, environments and configurable 3D assets in the SketchUp content library.

Chaos acquires AI software firm EvolveLab
https://aecmag.com/visualisation/chaos-acquires-ai-software-firm-evolvelab/
Wed, 19 Feb 2025 13:00:07 +0000

Developer of V-Ray and Enscape will gain valuable AI visualisation technology and unlock new opportunities in AEC design software

Chaos, a specialist in arch viz software, has acquired EvolveLab, a developer of AI tools for streamlining visualisation, generative design, documentation and interoperability for AEC professionals.

According to Chaos, the acquisition will reinforce its design-to-visualisation workflows, while expanding to include critical tools for BIM automation, AI-driven ideation and computational design.

Founded in 2015, EvolveLab was the first to integrate generative AI technology into architectural modelling software, demonstrating the massive potential of mixing imaginative prompts with 3D geometry. Through its flagship software Veras – which AEC Magazine reviewed back in 2023 – EvolveLab connected this capability to leading BIM tools like SketchUp, Revit, Vectorworks, and others, before expanding into smart documentation and generative design.

Looking ahead, the role of AI in traditional visualisation software will only expand, making the acquisition of EvolveLab a smart strategic move for Chaos. It will be fascinating to see how the two development teams collaborate to integrate their respective technologies.


Even before the acquisition, designers relied on the combination of EvolveLab and Chaos tools, using Veras and Enscape to accelerate both design and reviews. In the schematic design phase, this means rapidly generating ideas in Veras before committing the design to BIM, where Enscape’s real-time visualisation capabilities push the project even further.

“Over a year ago, we began exploring AI tools to speed up our workflows and were excited to discover Veras, a solution specifically designed for AEC that seamlessly integrates with host platforms,” said Hanns-Jochen Weyland of Störmer Murphy and Partners, an award-winning architectural practice based in Hamburg, Germany. “Veras is now our go-to for initial ideation before transitioning to renderings in Enscape. This powerful combination accelerates concept development and ensures reliable outcomes.”

Enscape render enhanced with AI visualisation software Veras

“At Cuningham, we integrate EvolveLab’s Veras and Glyph alongside Chaos’ Enscape to enhance our design process,” said Joseph Bertucci, senior project design technologist of Cuningham, an integrated design firm with offices across the United States. “Using both Enscape and Veras allows us to visualise, iterate, and explore design concepts in real-time while leveraging AI-driven enhancements for rapid refinement. Meanwhile, Glyph has been a game-changer for auto-documentation, enabling us to efficiently generate views and drawing sets, saving valuable time in project setup. These tools collectively streamline our workflows, boosting efficiency, precision, and creativity.”

Chaos and the EvolveLab teams are exploring ways to integrate their products and accelerate their AI roadmaps. EvolveLab products will remain available to customers. The EvolveLab team will join Chaos, with Bill Allen serving as director of product management and EvolveLab chief technology officer Ben Guler as director of software development.

EvolveLab apps include Veras, for AI-powered visualisation; Glyph, for automating and standardising documentation tasks; Morphis, for generating designs in real-time; and Helix, for interoperability between BIM tools.

What AEC Magazine thinks

Like many long-established architectural visualisation software developers, Chaos has undoubtedly sensed growing competition from AI renderers over the past few years.

While tools like EvolveLab's Veras aren't yet mature enough, nor do they offer the necessary control, to replace software like Enscape, they are already capable of handling certain aspects of the arch viz workflow—particularly in the early phases of a project. AI renderers can also enhance final outputs, improving visual quality. In fact, last year, Chaos introduced its own AI Enhancer for Enscape, which uses AI to transform assets like people and vegetation into high-quality, photorealistic visuals—minimising the need for high-poly, resource-intensive models.

Looking ahead, the role of AI in traditional visualisation software will only expand, making the acquisition of EvolveLab a smart strategic move for Chaos. It will be fascinating to see how the two development teams collaborate to integrate their respective technologies.

While EvolveLab's AI rendering technology and expertise were likely the main drivers behind the acquisition, Chaos has also gained access to powerful tools for BIM automation, AI-driven ideation, and computational design. In our interview with EvolveLab CEO Bill Allen last year, he spoke of the company's ambitious vision, including auto-generated drawings.

With the launch of Enscape Impact last year—bringing building performance analysis into Enscape’s real-time environment—Chaos has already shown its willingness to expand into new areas of AEC technology. Now, with advanced AEC design tools in its portfolio, it will be interesting to see how the company continues to evolve.

The post Chaos acquires AI software firm EvolveLab appeared first on AEC Magazine.

Twinmotion 2025.1 adds realism to exterior scenes (19 February 2025)

Real-time viz software includes volumetric clouds, adds control over sky, sun and fog

Epic Games has unveiled Twinmotion 2025.1, the latest release of its easy-to-use visualisation software, widely used by AEC professionals.

New features include volumetric clouds; enhanced real-time rendering of orthographic views; and ‘configurations’ that enable users to build interactive 3D presentations that showcase different variations of a project.

To enhance the realism of exterior scenes, Twinmotion now has the option to use ‘true volumetric clouds’. Users can author the appearance of clouds by adjusting their altitude, coverage, and distribution, and by fine-tuning their density, colour, puffiness, and other settings.

Volumetric clouds can be affected by wind and will cast shadows. The software includes several presets so users can choose different cloud formations as starting points for their own creations.

Adding further control to exterior scenes, users can now adjust the clarity and colour of the dynamic sky via new settings for turbidity and atmosphere density. In addition, the colour or temperature of the sun (or the directional light in the case of HDRI skies) can also be set, as well as colour, height and density of fog.

Environment settings can be saved as presets, enabling users to apply all the settings in the environment panel in a single click. There are several defaults, including “Golden hour,” “Sunrise glow,” “Rainy day,” and “Mars horizon.”

To showcase different variations of a project to clients or stakeholders, Twinmotion now includes a new ‘Configurations’ feature. Users can instantly switch between the variations when using Twinmotion in Fullscreen mode or when viewing images, panoramas, videos, or sequences in local presentations.

Configurations in Twinmotion 2025.1
Real-time Orthographic rendering in Twinmotion 2025.1

For architects, real-time rendering of orthographic views in Standard and Lumen lighting modes has been ‘significantly enhanced’. There’s now support for shadows, and the black outline around objects has been removed. According to the developers, this enables users to quickly produce high-quality plan and elevation views without using the Path Tracer, as well as facilitating more precise interactive object placement.

Elsewhere, users now have more choice when rendering shadows in real-time rendering mode. New Virtual Shadow Map (VSM) technology is said to produce shadows that are more accurate than standard shadows and that are more consistent with path-traced shadows.

There are also several Camera animation enhancements, a measure tool that enables you to precisely measure the distance between two arbitrary points, and automatic level of detail (LOD) generation to help maintain real-time performance when working with complex imported meshes.

Artificial horizons: AI in AEC (12 February 2025)

In AEC, AI rendering tools have already impressed, but AI model creation has not – so far. Martyn Day spoke with Greg Schleusner, director of design technology at HOK, to get his thoughts on the AI opportunity

One can’t help but be impressed by the current capabilities of many AI tools. Standout examples include Gemini from Google, ChatGPT from OpenAI, Musk’s Grok, Meta AI and now the new Chinese wunderkind, DeepSeek.

Many billions of dollars are being invested in hardware. Development teams around the globe are racing to create an artificial general intelligence, or AGI, to rival (and perhaps someday, surpass) human intelligence.

In the AEC sector, R&D teams within all of the major software vendors are hard at work on identifying uses for AI in this industry. And we’re seeing the emergence of start-ups claiming AI capabilities and hoping to beat the incumbents at their own game.

However, beyond the integration of ChatGPT frontends, or yet another AI renderer, we have yet to feel the promised power of AI in our everyday BIM tools.

The rendering race

The first and most notable application area for AI in the field of AEC has been rendering, with the likes of Midjourney, Stable Diffusion, Dall-E, Adobe Firefly and Sketch2Render all capturing the imaginations of architects.

While the price of admission has been low, challenges have included the need to describe an image in words (there is, it seems, a whole art to writing prompts) and then to somehow remain in control of the AI's output through subsequent iterations.


Greg Schleusner speaking at AEC Magazine’s NXT BLD conference

In this area, we’ve seen the use of LoRAs (Low Rank Adaptations), which implement trained concepts/styles and can ‘adapt’ to a base Stable Diffusion model, and ControlNet, which empowers precise and structural control to deliver impressive results in the right hands.

For those wishing to dig further, we recommend familiarising yourself with the amazing work of Ismail Seleit and his custom-trained LoRAs combined with ControlNet. For those who’d prefer not to dive so deep into the tech, SketchUp Diffusion, Veras, and AI Visualizer (for Archicad, Allplan and Vectorworks), have helped make AI rendering more consistent and likely to lead to repeatable results for the masses.

However, when it comes to AI ideation, at some point, architects would like to bring this into 3D – and there is no obvious way to do this. This work requires real skill, interpreting a 2D image into a Rhino model or Grasshopper script, as demonstrated by the work of Tim Fu at Studio Tim Fu.

It’s possible that AI could be used to auto-generate a 3D mesh from an AI conceptual image, but this remains a challenge, given the nature of AI image generation. There are some tools out there which are making some progress, by analysing the image to extract depth and spatial information, but the resultant mesh tends to come out as one lump, or as a bunch of meshes, incoherent for use as a BIM model or for downstream use.


Back in 2022, we tried taking 2D photos and AI-generated renderings from Hassan Ragab into 3D using an application called Kaedim. But the results were pretty unusable, not least because at that time Kaedim had not been trained on architectural models and was more aimed at the games sector.

Of course, if you have multiple 2D images of a building, it is possible to recreate a model using photogrammetry and depth mapping.

AI in AEC – text to 3D

It’s possible that the idea of auto-generating models from 2D conceptual AI output will remain a dream. That said, there are now many applications coming online that aim to provide the AI generation of 3D models from text-based input.

The idea here is that you simply describe in words the 3D model you want to create – a chair, a vase, a car – and AI will do the rest. AI algorithms are currently being trained on vast datasets of 3D models, 2D images and material libraries.

While 3D geometry has mainly been expressed through meshes, there have been innovations in modelling geometry with the development of Neural Radiance Fields (NeRFs) and Gaussian splats, which represent colour and light at any point in space, enabling the creation of photorealistic 3D models with greater detail and accuracy.

Today, we are seeing a high number of firms bringing 'text to 3D' solutions to market. Adobe Substance 3D Modeler has a plug-in for Photoshop that can perform text-to-3D. Autodesk, meanwhile, demonstrated similar technology — Project Bernini — at Autodesk University 2024.

However, the AI-generated output of these tools seems to be fairly basic — usually symmetrical objects and more aimed towards creating content for games.

In fact, the bias towards games content generation can be seen in many offerings. These include Tripo, Kaedim, Google DreamFusion and Luma AI Genie.

There are also several open source alternatives. These include Hunyuan3D-1, Nvidia’s Magic 3D and Edify.

AI in AEC – the Schleusner viewpoint

When AEC Magazine spoke to Greg Schleusner of HOK on the subject of text-to-3D, he highlighted D5 Render, which is now an incredibly popular rendering tool in many AEC firms.

The application comes with an array of AI tools, to create materials, texture maps and atmosphere match from images. It supports AI scaling and has incorporated Meshy’s text-to-AI generator for creating content in-scene.

That means architects could add in simple content, such as chairs, desks, sofas and so on — via simple text input during the arch viz process. The items can be placed in-scene on surfaces with intelligent precision and are easily edited. It’s content on demand, as long as you can describe that content well in text form.


Text-to-3D technology from Autodesk – Project Bernini

Schleusner said that, from his experimentation, text-to-image or image-to-video tools are getting better, and will eventually be quite useful — but that can be scary for people working in architecture firms. As an example, he suggested that someone could show a rendering of a chair within a scene, generated via text to AI. But it's not a real chair, and it can't be purchased, which might be problematic when it comes to work that will be shown to clients. So, while there is certainly potential in these types of generative tools, mixing fantasy with reality in this way doesn't come problem-free.

It may be possible to mix the various model generation technologies. As Schleusner put it: “What I’d really like to be able to do is to scan or build a photogrammetric interior using a 360-degree camera for a client and then selectively replace and augment the proposed new interior with new content, perhaps AI-created.”

Gaussian splat technology is getting good enough for this, he continued, while SLAM laser scan data is never dense enough. “However, I can’t put a Gaussian splat model inside Revit. In fact, none of the common design tools support that emerging reality capture technology, beyond scanning. In truth, they barely support meshes well.”


AI in AEC – LLMs and AI agents

At the time of writing, DeepSeek has suddenly appeared like a meteor, seemingly out of nowhere, intent on ruining the business models of ChatGPT, Gemini and other providers of paid-for AI tools.

Schleusner was early into DeepSeek and has experimented with its script and code-writing capabilities, which he described as very impressive.

LLMs, like ChatGPT, can generate Python scripts to perform tasks in minutes, such as creating sample data, training machine learning models, and writing code to interact with 3D data.

Schleusner is finding that AI-generated code can accomplish these tasks relatively quickly and simply, without needing to write all the code from scratch himself.

“While the initial AI-generated code may not be perfect,” he explained, “the ability to further refine and customise the code is still valuable. DeepSeek is able to generate code that performs well, even on large or complex tasks.”

With AI, much of the expectation among customers centres on the addition of these new capabilities to existing design products. For instance, in the case of Forma, Autodesk claims the product uses machine learning for real-time analysis of sunlight, daylight, wind and microclimate.

However, if you listen to AI-proactive firms such as Microsoft, executives talk a lot about ‘AI agents’ and ‘operators’, built to assist firms and perform intelligent tasks on their behalf.

Microsoft CEO Satya Nadella is quoted as saying, “Humans and swarms of AI agents will be the next frontier.” Another of his big statements is that, “AI will replace all software and will end software as a service.” If true, this promises to turn the entire software industry on its head.

Today’s software as a service, or SaaS, systems are proprietary databases/silos with hard-coded business logic. In an AI agent world, these boundaries would no longer exist. Instead, firms will run a multitude of agents, all performing business tasks and gathering data from any company database, files, email or website. In effect, if it’s connected, an AI agent can access it.

At the moment, to access certain formatted data, you have to open a specific application and maybe have deep knowledge to perform a range of tasks. An AI agent might transcend these limitations to get the information it needs to make decisions, taking action and achieving business-specific goals.

AI agents could analyse vast amounts of data, such as building designs, to predict structural integrity, immediately flag up if a BIM component causes a clash, and perhaps eventually generate architectural concepts. They might also be able to streamline project management by automating routine tasks and providing real-time insights for decision-making.

The main problem is going to be data privacy, as AI agents require access to sensitive information in order to function effectively. Additionally, the transparency of AI decision-making processes remains a critical issue, particularly in high-stakes AEC projects where safety, compliance and accuracy are paramount.

On the subject of AI agents, Schleusner said he has a very positive view of the potential for their application in architecture, especially in the automation of repetitive tasks. During our chat, he demonstrated how a simple AI agent might automate the process of generating something as simple as an expense report, extracting relevant information, both handwritten and printed, from receipts.

He has also experimented by creating an AI agent for performing clash detection on two datasets, which contained only XYZ positions of object vertices. Without creating a model, the agent was able to identify if the objects were clashing or not. The files were never opened. This process could be running constantly in the background, as teams submitted components to a BIM model. AI agents could be a game-changer when it comes to simplifying data manipulation and automating repetitive tasks.
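The coarse check Schleusner describes can be sketched in a few lines. This is a hypothetical illustration of the idea (an axis-aligned bounding-box test on raw vertex coordinates), not the agent he actually built:

```python
# Hypothetical sketch: a coarse clash check using only XYZ vertex positions,
# in the spirit of the agent experiment described above.

def bounding_box(vertices):
    """Axis-aligned bounding box of a list of (x, y, z) points."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_clash(a, b):
    """True if the two objects' bounding boxes overlap on all three axes."""
    (amin, amax), (bmin, bmax) = bounding_box(a), bounding_box(b)
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

duct = [(0, 0, 0), (2, 1, 1)]        # vertex extremes of one component
beam = [(1.5, 0.5, 0.5), (4, 2, 2)]  # overlaps the duct
print(boxes_clash(duct, beam))       # True
```

A real agent would refine hits with exact geometry tests, but even this level of check needs no model to be opened — only the coordinate data.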

Another area where Schleusner feels that AI agents could be impactful is in the creation of customisable workflows, allowing practitioners to define the specific functions and data interactions they need in their business, rather than being limited by pre-built software interfaces and limited configuration workflows.

Most of today’s design and analysis tools have built-in limitations. Schleusner believes that AI agents could offer a more programmatic way to interact with data and automate key processes. As he explained, “There’s a big opportunity to orchestrate specialised agents which could work together, for example, with one agent generating building layouts and another checking for clashes. In our proprietary world with restrictive APIs, AI agents can have direct access and bypass the limits on getting at our data sources.”


Stable Diffusion image courtesy of James Gray

Conclusion

For the foreseeable future, AEC professionals can rest assured that AI, in its current state, is not going to totally replace any key roles — but it will make firms more productive.

The potential for AI to automate design, modelling and documentation is currently overstated, but as the technology matures, it will become a solid assistant. And yes, at some point years hence, AI with hard-coded knowledge will be able to automate some new aspects of design, but I think many of us will be retired before that happens. However, there are benefits to be had now and firms should be experimenting with AI tools.

We are so used to the concept of programmes and applications that it’s kind of hard to digest the notion of AI agents and their impact. Those familiar with scripting are probably also constrained by the notion that the script runs in a single environment.

By contrast, AI agents work like ghosts, moving around connected business systems to gather, analyse, report, collaborate, prioritise, problem-solve and act continuously. The base level is a co-pilot that may work alongside a human performing tasks, all the way up to fully autonomous operation, uncovering data insights from complex systems that humans would have difficulty in identifying.

If the data security issues can be dealt with, firms may well end up with many strategic business AI agents running and performing small and large tasks, taking a lot of the donkey work from extracting value from company data, be that an Excel spreadsheet or a BIM model.

AI agents will be key IP tools for companies and will need management and monitoring. The first hurdle to overcome is realising that the nature of software, applications and data is going to change radically and in the not-too-distant future.


Main image: Stable Diffusion architectural images courtesy of James Gray. Image (left) generated with ModelMakerXL, a custom trained LoRA by Ismail Seleit. Follow Gray on LinkedIn

Workstations for arch viz (9 February 2025)

What’s the best GPU or CPU for arch viz? Greg Corke tests a variety of processors in six of the most popular tools – D5 Render, Twinmotion, Lumion, Chaos Enscape, Chaos V-Ray, and Chaos Corona

When it comes to arch viz, everyone dreams of a silky-smooth viewport and the ability to render final quality images and videos in seconds. However, such performance often comes with a hefty price tag. Many professionals are left wondering: is the added cost truly justified?

To help answer this question, we put some of the latest workstation hardware through its paces using a variety of popular arch viz tools. Before diving into the detailed benchmark results on the following pages, here are some key considerations to keep in mind.


This article is part of AEC Magazine’s 2025 Workstation Special report

GPU processing

Real-time viz software like Enscape, Lumion, D5 Render, and Twinmotion rely on the GPU to do the heavy lifting. These tools offer instant, high-quality visuals directly in the viewport, while also allowing top-tier images and videos to be rendered in mere seconds or minutes.

The latest releases support hardware ray tracing, a feature built into modern GPUs from Nvidia, AMD and Intel. While ray tracing demands significantly more computational power than traditional rasterisation, it delivers unparalleled realism in lighting and reflections.

GPU performance in these tools is typically evaluated in two ways: Frames Per Second (FPS) and render time. FPS measures viewport interactivity — higher numbers mean smoother navigation and a better user experience — while render time, expressed in seconds, determines how quickly final outputs are generated. Both metrics are crucial, and we’ve used them to benchmark various software in this article.

For your own projects, aim for a minimum of 24–30 FPS for a smooth and interactive viewport experience. Performance gains above this threshold tend to have diminishing returns, although we expect hardcore gamers might disagree. Display resolution is another critical factor. If your GPU struggles to maintain performance, reducing resolution from 4K to FHD can deliver a significant boost.
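Those targets translate directly into a per-frame time budget, a quick way to interpret FPS figures (simple arithmetic, not part of our benchmarks):

```python
# Convert FPS targets into the time the GPU has to render each frame.
for fps in (24, 30, 60):
    budget_ms = 1000 / fps
    print(f"{fps} FPS -> {budget_ms:.1f} ms per frame")
# 24 FPS allows ~41.7 ms per frame; at 60 FPS the budget shrinks to ~16.7 ms.
```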

It’s worth noting that while some arch viz software supports multiple GPUs, this only affects render times rather than viewport performance. Tools like V-Ray, for instance, scale exceptionally well with multiple GPUs, but in order to take advantage you’ll need a workstation with adequate power and sufficient PCIe slots to accommodate the GPUs.

GPU memory

The amount of memory a GPU has is often more critical than its processing power. In some software, running out of GPU memory can cause crashes or significantly slow down performance. This happens because the GPU is forced to borrow system memory from the workstation via the PCIe bus, which is much slower than accessing its onboard memory.

The impact of insufficient GPU memory depends on your workflow. For final renders, it might simply mean waiting longer for images or videos to finish processing. However, in a real-time viewport, running out of memory can make navigation nearly impossible. In extreme cases, we’ve seen frame rates plummet to 1-2 FPS, rendering the scene completely unworkable.

Fortunately, GPU memory and processing power usually scale together. Professional workstation GPUs, such as Nvidia RTX or AMD Radeon Pro, generally offer significantly more memory than their consumer-grade counterparts like Nvidia GeForce or AMD Radeon. This is especially noticeable at the lower end of the market. For example, the Nvidia RTX 2000 Ada, a 70W GPU, is equipped with 16 GB of onboard memory.

For real-time visualisation workflows, we recommend a minimum of 16 GB, though 12 GB can suffice for laptops. Anything less could require compromises, such as simplifying scenes and textures, reducing display resolution, or lowering the quality of exported renders.
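To see how quickly that memory disappears, consider texture data alone. The figures below are our own back-of-envelope arithmetic (uncompressed RGBA and a hypothetical asset count), not measurements from any specific application:

```python
# Rough VRAM footprint of uncompressed RGBA textures, including mipmaps.
def texture_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    base = width * height * bytes_per_pixel
    return base * (4 / 3 if mipmaps else 1) / (1024 ** 2)  # mip chain adds ~33%

# One hundred 4K (4096 x 4096) material textures:
print(f"{100 * texture_mb(4096, 4096):.0f} MB")  # 8533 MB, over half of a 16 GB card
```

In practice GPU texture compression reduces this considerably, but the arithmetic shows why detailed scenes overwhelm 8 GB cards.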

CPU processing

CPU rendering was once the standard for most arch viz workflows, but today it often plays second fiddle to GPU rendering. That said, it remains critically important for certain software. Chaos Corona, a specialist tool for arch viz, relies entirely on the CPU for rendering. Meanwhile, Chaos V-Ray gives users the flexibility to choose between CPU and GPU. Some still favour the CPU renderer for its greater control and the ability to harness significantly more memory when paired with the right workstation hardware. For example, while the top-tier Nvidia RTX 6000 Ada Generation GPU comes with an impressive 48 GB of on-board memory, a Threadripper Pro workstation can support up to 1 TB or more of system memory.

CPU renderers scale exceptionally well with core count — the more cores your processor has, the faster your renders. However, as core counts increase, frequencies drop, so doubling the cores won’t necessarily cut render times in half.

Take the 96-core Threadripper Pro 7995WX, for example. It's a powerhouse that's the ultimate dream for arch viz specialists. But does it justify its price tag—nearly 20 times that of the 16-core AMD Ryzen 9 9950X—for rendering performance that's only 3 to 4 times faster? As arch viz becomes more prevalent across AEC firms, that's a tough call for many.
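Putting the numbers quoted above together gives a feel for the scaling efficiency (the midpoint speedup is our own assumption, not a measured figure):

```python
# Scaling efficiency implied by the figures above: a 6x core advantage
# yielding roughly 3-4x the rendering performance.
cores_threadripper, cores_ryzen = 96, 16
speedup_observed = 3.5  # assumed midpoint of the "3 to 4 times faster" quoted above
core_ratio = cores_threadripper / cores_ryzen
efficiency = speedup_observed / core_ratio
print(f"{core_ratio:.0f}x the cores, {efficiency:.0%} scaling efficiency")
# -> 6x the cores, 58% scaling efficiency
```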


Chaos Corona 10

Chaos Corona is a CPU-only renderer designed for arch viz. It scales well with more CPU cores. But the 96-core Threadripper Pro 7995WX, despite having six times the cores of the 16-core AMD Ryzen 9 9950X and achieving an overclocked all-core frequency of 4.87 GHz, delivers only three times the performance.

 

Chaos Corona benchmark results


Chaos V-Ray 6

Chaos V-Ray is a versatile photorealistic renderer, renowned for its realism. It includes both a CPU and GPU renderer. The CPU renderer supports the most features and can handle the largest datasets, as it relies on system memory. Performance scales efficiently with additional cores.

V-Ray GPU works with Nvidia GPUs. It is often faster than the CPU renderer, and can make very effective use of multiple GPUs, with performance scaling extremely well. However, the finite onboard memory can restrict the size of scenes. To address this, V-Ray GPU includes several memory-saving features, such as offloading textures to system memory. It also offers a hybrid mode where both the CPU and GPU work together, optimising performance across both processors.

V-Ray benchmark results


D5 Render 2.9

D5 Render is a real-time arch viz tool, based on Unreal Engine. Its ray tracing technology is built on DXR, requiring a GPU with dedicated ray-tracing cores from Nvidia, Intel, or AMD.

The software uses Nvidia DLSS, allowing Nvidia GPUs to boost real time performance. Multiple GPUs are not supported.

The benchmark uses 4 GB of GPU memory, so all GPUs are compared on raw performance alone. Real time scores are capped at 60 FPS.

D5 Render benchmark results


Enscape 3.3

Enscape is a very popular tool for real-time arch viz. It supports hardware ray tracing, and also Nvidia DLSS, but not the latest version.

For testing we used an older version of Enscape (3.3). This had some incompatibility issues with AMD GPUs, so we limited our testing to Nvidia. Enscape 4.2, the latest release, supports AMD.

We focused on real time performance, rather than time to render. The gap between the RTX 5000 Ada and RTX 6000 Ada was not that big. Our dataset uses 11 GB of GPU memory, which caused the software to crash when using the Nvidia RTX A1000 (8GB).

Enscape benchmark results


Lumion Pro 2024

Lumion is a real-time arch viz tool known for placing exterior scenes in the context of nature.

The software will benefit from a GPU with hardware raytracing, but those with older GPUs can still render with rasterisation.

Our test scene uses 11 GB of GPU memory, which meant the 8 GB GPUs struggled. The Nvidia RTX A1000 slowed down, while the AMD Radeon Pro W7500 & W7600 caused crashes. The high-end AMD GPUs did OK against Nvidia, but slowed down in ray tracing.

Lumion benchmark results


Twinmotion 2024.1.2

Twinmotion from Epic Games is a real-time viz tool powered by Unreal Engine. It includes a DXR path tracer, for accurate lighting and Global Illumination (GI) and will benefit from one or more GPUs with hardware raytracing – AMD or Nvidia.

Our test scene uses 20 GB of GPU memory, massively slowing down the 8 GB GPUs. The 8 GB AMD cards caused the software to crash with the Path Tracer. The high-end AMD GPUs did OK against Nvidia but were well off the pace in path tracing.

Twinmotion benchmark results


Nvidia DLSS – using AI to boost performance in real-time

Nvidia DLSS (Deep Learning Super Sampling) is a suite of AI-driven technologies designed to significantly enhance 3D performance (frame rates), in real-time visualisation tools.

Applications including Chaos Enscape, Chaos Vantage and D5 Render, have integrated DLSS to deliver smoother experiences, and to make it possible to navigate larger scenes on the same GPU hardware.

DLSS comprises three distinct technologies, all powered by the Tensor Cores in Nvidia RTX GPUs:

Super Resolution
This boosts performance by using AI to render higher-resolution frames from lower-resolution inputs. For instance, it enables 4K-quality output while the GPU processes frames at FHD resolution, saving core GPU resources without compromising visual fidelity.

DLSS Ray Reconstruction
This enhances image quality by using AI to generate additional pixels for intensive ray-traced scenes.

Frame Generation
This increases performance by using AI to interpolate and generate extra frames. While DLSS 3.0 could generate one additional frame, DLSS 4.0, exclusive to Nvidia's upcoming Blackwell-based GPUs, can generate up to three frames between traditionally rendered ones.

When these three technologies work together, an astonishing 15 out of every 16 pixels can be AI-generated.

DLSS 4.0 will soon be supported in D5 Render, promising transformative performance gains. Nvidia has demonstrated that it can elevate frame rates from 22 FPS (without DLSS 4.0) to an incredible 87 FPS.
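The '15 out of every 16 pixels' figure follows from simple arithmetic, assuming Super Resolution's 4x upscale (FHD to 4K, as described above) combined with three AI-generated frames for every rendered one:

```python
# Fraction of displayed pixels the GPU traditionally renders when DLSS
# technologies combine: quarter-resolution input x one rendered frame in four.
upscale_pixel_fraction = 1 / 4   # FHD (1920x1080) rendered, 4K (3840x2160) displayed
frame_fraction = 1 / 4           # Multi Frame Generation: 1 rendered + 3 AI frames
rendered = upscale_pixel_fraction * frame_fraction
print(f"AI-generated share: {1 - rendered:.4f}")  # 0.9375, i.e. 15 of every 16 pixels
```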


D5 Render with DLSS 4

