FutureFive Australia - Consumer technology news from the future
Roscommon Systems adds video narration to LIMA screen reader

Mon, 4th May 2026
Mark Tarre, News Chief

Roscommon Systems has launched a video narration feature for its LIMA screen reader, aimed at improving YouTube access for low-vision users.

The Brisbane startup says the update allows its AI-based software to describe visual elements in online video, including on-screen text and human gestures, without requiring users to switch to an external tool.

LIMA, short for Low-vision Intelligent Machine Assistant, was introduced earlier this year as a screen reader for blind and vision-impaired users. The software is designed to let users operate a computer through voice commands rather than a mouse or keyboard.

The latest update responds to the limits of conventional screen readers on visual platforms such as YouTube. Those tools often focus on spoken dialogue and may miss non-verbal visual information that helps explain what is happening in a video.

User feedback

Callum Ginty, chief executive officer and co-founder of Roscommon Systems, said user feedback shaped the new feature. "We kept hearing from blind and vision-impaired users that they felt left out of the social media video experience," he said. "That feedback drove everything. We set out to build something that works across all platforms and with any browser, so users never have to leave the page they're already on."

The narration system adapts to a video's complexity and manages playback, pausing and resuming at points intended to make descriptions easier to follow. It is integrated directly with YouTube.

The update builds on functions already in LIMA, including image recognition that describes page layouts and interprets text embedded in images, and spoken output designed for longer listening sessions.

Accessibility gap

The launch highlights a broader issue for blind and vision-impaired people using video-heavy online services. Social media and streaming platforms rely heavily on visual cues, from facial expressions to actions and scene changes, and standard assistive tools do not always capture them.

Roscommon Systems pointed to research showing continued barriers for vision-impaired users across websites and multimedia platforms, linking them to frustration, lower productivity and reduced participation in digital spaces.

Early users described the feature positively. "This is a significant step forward in breaking down barriers to information access for vision-impaired people," said test user Jennifer Parry. "It shows what is truly possible."

Another user, Kushal Solanki, said the feature changed how he could engage with certain video material. "Being able to get detailed descriptions of a video featuring my guru was truly a joyous moment for me," he said. "LIMA was able to capture the minor details within the video."

He added: "Before using the descriptive video service, it was pretty difficult to access YouTube videos that don't involve talking. If there are visual elements, someone would have to describe the video for me."

Wider use

Ginty said the feature marks a shift in how assistive software can handle different types of visual media without relying on separate websites. "Gone are the days of copying links into external websites to generate narrations," he said.

He said the system adjusts the amount of narration to match what is on screen. "For a podcast video where it's just two hosts talking on a plain background, LIMA is intelligent enough to know that such a video doesn't need very much narration. Whereas a movie action scene will have much more narration as this type of media is more visually heavy."

He added: "AI's benefits are real and are making a meaningful difference in the daily lives of blind and vision-impaired people."

The descriptive video service is available to users worldwide.