New AI Study Reveals Clocks and Calendars Still a Challenge for Machines

Artificial intelligence (AI) can do a lot of impressive things, like writing code and passing exams, but when it comes to reading analog clocks and figuring out what day a date falls on, it isn't winning any awards. A recent study, presented at the 2025 International Conference on Learning Representations (ICLR) and published on the preprint server arXiv, uncovered some striking shortcomings in AI's timekeeping abilities.

Clocks and calendars are things most humans handle easily; we learn to tell time and read calendars from a young age. AI, it turns out, struggles with both. According to Rohit Saxena, the study's lead author from the University of Edinburgh, that matters: if AI is going to be useful in real-world applications that involve time, such as scheduling and automation, it needs to close this gap.

The researchers fed a set of clock and calendar images into various large language models (LLMs) that can process both visual and textual information, including Meta's Llama 3.2-Vision and Google's Gemini 2.0. The models were asked to read the time from a clock image or name the day of the week for a given date, and they failed more than half the time. The results suggest AI struggles with the spatial reasoning needed to interpret clock hands and with the arithmetic behind calendar calculations.
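For contrast, the geometry the models stumbled over is entirely deterministic. This sketch (my illustration, not code from the study) shows the simple arithmetic behind analog clock hands: the minute hand moves 6° per minute and the hour hand 30° per hour plus 0.5° per minute of drift.

```python
def hand_angles(hour: int, minute: int) -> tuple[float, float]:
    """Return (hour_angle, minute_angle) in degrees, clockwise from 12."""
    minute_angle = 6.0 * minute                      # 360 degrees / 60 minutes
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute   # 360 / 12 hours, plus drift
    return hour_angle, minute_angle

print(hand_angles(3, 0))   # hands at 90 and 0 degrees
print(hand_angles(6, 30))  # hour hand halfway between 6 and 7
```

Reading a clock the other way, from hand angles back to a time, is just the inverse of this mapping; the hard part for the models is the visual step, not the math.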

The models had a particularly hard time with tasks like identifying overlapping clock hands or calculating the day of the year, tasks that seem basic from a human perspective. Even though arithmetic should be trivial for computers, large language models have trouble with it because they rely on patterns in their training data rather than performing actual calculations.
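To underline the gap: the calendar questions the models failed are one-liners for conventional software. A minimal sketch (my example, not the study's test set) using Python's standard `datetime` module:

```python
from datetime import date

def day_of_week(d: date) -> str:
    # %A expands to the full weekday name, e.g. "Friday"
    return d.strftime("%A")

def day_of_year(d: date) -> int:
    # tm_yday is the 1-based ordinal day within the year
    return d.timetuple().tm_yday

d = date(2025, 3, 14)
print(day_of_week(d))  # Friday
print(day_of_year(d))  # 73
```

A calculator-style tool computes this exactly every time; an LLM predicting the answer from text patterns does not, which is why the study's results are so lopsided.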

In the end, the study shows that while AI can do a lot of impressive things, it still has a long way to go toward understanding the world the way humans do. It's also a reminder not to blindly trust everything a model outputs: for tasks that combine perception and reasoning, keeping a human in the loop is still a sensible safeguard.

In short, AI is powerful, but it isn't infallible. For tasks that mix logic and spatial reasoning, we should think twice before relying fully on machines.