
We asked VALA specialists who attended RoboCon 2025 to pick their favorite presentation and share their notes on it. Here’s a summary of their notes and takeaways, along with some additional thoughts on how these ideas might be applied in real projects.

Sanna – Perfbot: Integrated Performance Analysis of Robot Tests

By Lennart Potthoff

Lennart Potthoff introduced Perfbot, a tool that analyzes test execution times to detect performance regressions in test scripts or the system under test.

Many testers struggle with performance testing, often relying on event-driven monitoring or subjective feedback rather than full-scale regression testing. Perfbot offers a solution by comparing test run times with historical data. If a test suddenly runs much slower, it signals a possible performance issue.

The key advantage is that Perfbot analyzes results after test execution, using the ResultVisitor class from the Robot Framework API. This means it doesn’t slow down test runs. While not a replacement for dedicated performance tools, it helps raise awareness of performance issues early.

This approach can be used to continuously monitor performance and make data-driven improvements. A simple first step is storing test results in a database and analyzing trends over time.
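That first step can be sketched in a few lines. This is a hypothetical illustration, not Perfbot itself: it assumes per-test durations have already been extracted from `output.xml` (which Perfbot does via the ResultVisitor API), and uses only the standard library to store history in SQLite and flag tests that run much slower than their historical average. The test names, durations, and the 1.5× threshold are made up for the example.

```python
import sqlite3
import statistics

def store_run(conn, timings):
    """Append one run's per-test durations (seconds) to the history table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS timings (test TEXT, seconds REAL)")
    conn.executemany("INSERT INTO timings VALUES (?, ?)", timings.items())
    conn.commit()

def find_regressions(conn, latest, factor=1.5):
    """Return tests whose latest duration exceeds factor * historical mean."""
    slow = []
    for test, seconds in latest.items():
        rows = conn.execute(
            "SELECT seconds FROM timings WHERE test = ?", (test,)).fetchall()
        if rows:
            mean = statistics.mean(r[0] for r in rows)
            if seconds > factor * mean:
                slow.append(test)
    return slow

conn = sqlite3.connect(":memory:")
store_run(conn, {"Login Test": 2.0, "Search Test": 1.0})
store_run(conn, {"Login Test": 2.2, "Search Test": 1.1})
regressions = find_regressions(conn, {"Login Test": 6.0, "Search Test": 1.0})
```

Here "Login Test" jumps from roughly 2 seconds to 6 and gets flagged, while "Search Test" stays within its normal range. A real setup would also record run timestamps and commit IDs so slowdowns can be traced back to changes.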

Sanna found the idea promising and is considering testing Perfbot on a customer project to gain further insights.

Sanna

Test automation engineer

Sami – Make Automation Green Again: Experiments with AI-Supported Self-Healing

By Many Kasiriha

Many Kasiriha’s talk focused on self-healing tests—a way to reduce the time spent fixing broken locators when software updates cause tests to fail.

His experiment used Large Language Models (LLMs) to detect and fix broken locators in real time. With new listener enhancements in Robot Framework 7.1, test failures trigger an AI-based fix, allowing tests to continue running. A report is generated with the changes applied.
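To make the mechanism concrete, here is a hypothetical skeleton of such a listener. The listener API v3 interface (`ROBOT_LISTENER_API_VERSION = 3`, `end_keyword`) is Robot Framework's real extension point, but `suggest_locator` is a stub standing in for an LLM call, and the retry mechanics enabled by the 7.1 enhancements the talk described are omitted. The stub classes at the bottom exist only so the sketch can be exercised without Robot Framework installed.

```python
def suggest_locator(keyword_name, old_args, page_source):
    """Stub for the AI step: a real version would send the failure
    context and page source to an LLM and parse its suggestion."""
    return ["id=login-button"]  # hypothetical repaired locator

class SelfHealListener:
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self):
        # Collected (keyword, old args, suggestion) tuples for the report.
        self.healed = []

    def end_keyword(self, data, result):
        if result.status == "FAIL":
            suggestion = suggest_locator(data.name, list(data.args), "")
            self.healed.append((data.name, list(data.args), suggestion))

# Minimal stand-ins so the listener can be exercised outside Robot Framework:
class _Stub:
    def __init__(self, **kw):
        self.__dict__.update(kw)

listener = SelfHealListener()
listener.end_keyword(_Stub(name="Click Element", args=["id=old-button"]),
                     _Stub(status="FAIL"))
```

Even this skeleton hints at Sami's concern below: every healed locator is a silent change to what the test actually checks, so the generated report is essential reading, not an afterthought.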

While innovative, this approach raised concerns. Sami found it confusing: he expected the talk to focus on efficiency and sustainability in test automation, yet the self-healing process doubled test run times and produced more data and reports to manage.

During the Q&A, someone asked whether self-healed tests could be trusted. The speaker had no clear answer, which left Sami skeptical about its practical use.

Before adopting AI-driven test healing, small experiments should be run to evaluate its reliability and impact on test accuracy.

Sami

Test automation engineer


Meri – Infrastructure as Code: A Superpower for Test Automation

By Nils Balkow-Tychsen

This talk explored how Infrastructure as Code (IaC) can help manage and automate test environments, especially in cloud-based testing.

A common problem in cloud testing is hidden costs—when test environments run longer than necessary, leading to unexpected expenses. Manually managing configurations also increases the risk of lost knowledge when key employees leave.

IaC, using tools like Terraform alongside Robot Framework, lets teams create, store, and review infrastructure configurations as code. This ensures stability, consistency, and efficiency.
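In practice this often means the test suite itself provisions and tears down its environment. A minimal sketch, assuming Terraform is on the PATH: the wrapper below builds real Terraform CLI commands (`apply`/`destroy` with `-auto-approve`), while the variable file name and working directory are invented for illustration.

```python
import subprocess

def terraform_command(action, var_file=None):
    """Build a terraform CLI command; -auto-approve skips the prompt,
    which is what you want in unattended CI runs."""
    cmd = ["terraform", action, "-auto-approve"]
    if var_file:
        cmd.append(f"-var-file={var_file}")
    return cmd

def run_terraform(action, workdir=".", var_file=None):
    """Run terraform in workdir; raises CalledProcessError on failure."""
    subprocess.run(terraform_command(action, var_file),
                   cwd=workdir, check=True)

# A suite setup would call run_terraform("apply", var_file="test-env.tfvars")
# and the teardown run_terraform("destroy", var_file="test-env.tfvars"),
# so environments never outlive the tests that need them.
cmd = terraform_command("apply", var_file="test-env.tfvars")
```

Tying `destroy` to the suite teardown is exactly what addresses the hidden-cost problem above: the environment exists only while tests are running.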

IaC also enhances security and compliance by enforcing standard configurations. It integrates well with CI/CD pipelines, reducing manual effort and improving test reliability.

Meri sees IaC as a valuable approach for ensuring long-term test environment stability and cost efficiency.

Meri

Test automation engineer

Jan – Dear AI, Which Tests Should Robot Framework Execute Now?

By Elmar Jürgens

Elmar Jürgens tackled a major challenge in test automation: long feedback loops when running an entire test suite after every code change.

His solution? Run only the most relevant tests.

Instead of executing the full test set, AI-driven techniques like machine learning and LLMs can identify which tests are most likely to catch new bugs. This dramatically reduces test execution time without sacrificing quality.
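The simplest form of this idea needs no AI at all: map each test to the source files it exercises, then select only the tests touched by a change. The sketch below is hypothetical; real tools derive the mapping from coverage data or learn it with ML, and the file and test names here are made up.

```python
# Invented mapping from test name to the source files it exercises.
COVERAGE_MAP = {
    "Login Test": {"auth.py", "session.py"},
    "Search Test": {"search.py"},
    "Checkout Test": {"cart.py", "payment.py", "session.py"},
}

def select_tests(changed_files, coverage_map):
    """Return (sorted) tests whose covered files intersect the change set."""
    changed = set(changed_files)
    return sorted(test for test, files in coverage_map.items()
                  if files & changed)

# A change to session.py selects the two tests that touch it and
# skips "Search Test" entirely.
selected = select_tests({"session.py"}, COVERAGE_MAP)
```

The AI-driven techniques from the talk refine this baseline, ranking tests by predicted fault-detection likelihood rather than by raw file overlap.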

However, some methods—like defect prediction based on component size—were found to be unreliable.

Jan’s key takeaway: Don’t waste time running unnecessary tests. Use AI to optimize test execution and speed up feedback cycles.

Jan

Test automation engineer

Tommy – Transforming Robot Framework Results for Integrated Reporting

By Adam Mikulka

Adam Mikulka’s talk explored ways to improve test reporting in Robot Framework by modifying results either during execution (with Listeners) or after execution (with prerebot modifiers and ResultVisitors).

Standardized reports increase traceability, making it easier to spot trends, analyze test failures, and improve decision-making.

Tommy, already familiar with Listeners, found prerebot modifiers and ResultVisitors particularly useful. Unlike Listeners, they don’t affect test performance but still provide clear, structured reports.
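To illustrate the post-execution idea without depending on Robot Framework itself, here is a stdlib sketch: it reads an `output.xml`-shaped document and flattens it into a test-to-status summary. Robot's own ResultVisitor API traverses real result files far more robustly; the inlined XML and its suite/test names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Minimal output.xml-style snippet, invented for illustration.
OUTPUT_XML = """
<robot>
  <suite name="Login Suite">
    <test name="Valid Login"><status status="PASS"/></test>
    <test name="Invalid Login"><status status="FAIL"/></test>
  </suite>
</robot>
"""

def summarize(xml_text):
    """Map each test name to its status string (PASS/FAIL)."""
    root = ET.fromstring(xml_text)
    return {test.get("name"): test.find("status").get("status")
            for test in root.iter("test")}

summary = summarize(OUTPUT_XML)
```

Because this runs after execution, like a ResultVisitor or prerebot modifier, it adds nothing to test run time; the summary can then feed whatever unified reporting format the organization has standardized on.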

This approach is especially valuable in large organizations where unified reporting across teams helps improve efficiency.

For those looking to enhance their test reports, integrating these tools into Robot Framework could be a game-changer.

Tommy

Test automation engineer


VALA people rocking at RoboCon! Until next year!
