Found a paper [0] that discusses a possible mechanism [1]:
> The manual for Open Sesame! mentions that some neural learning mechanism is used but does not give further explanations [...] (Caglayan et al. 1996), however claim that Open Sesame! makes use of a variation of adaptive resonance theory-2 (ART-2) algorithm of Carpenter and Grossberg.
[0] https://api.digie.ai/publications/Hoyle-paper-review.pdf
[1] https://en.wikipedia.org/wiki/Adaptive_resonance_theory
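The core idea behind adaptive resonance theory is that an input is absorbed into an existing category only if it matches that category's prototype closely enough (the "vigilance" test); otherwise a new category is created. A minimal sketch of just that vigilance mechanism (this is a toy, not the full ART-2 algorithm, which adds normalization and continuous-valued processing layers):

```python
import numpy as np

def art_cluster(patterns, vigilance=0.7):
    """Assign each binary pattern to a category via a vigilance test."""
    prototypes = []   # learned category prototypes
    assignments = []
    for p in patterns:
        p = np.asarray(p, dtype=float)
        best, best_sim = None, -1.0
        for i, proto in enumerate(prototypes):
            # match score: fraction of the input captured by the prototype
            sim = np.minimum(p, proto).sum() / max(p.sum(), 1e-9)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= vigilance:
            # resonance: refine the winning prototype toward the input
            prototypes[best] = np.minimum(prototypes[best], p)
            assignments.append(best)
        else:
            # mismatch: allocate a fresh category
            prototypes.append(p.copy())
            assignments.append(len(prototypes) - 1)
    return assignments

print(art_cluster([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]]))
# → [0, 0, 1]
```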
I've always wondered why this capability (detecting repetitive patterns and offering an auto-complete option to apply it to the whole collection) is not more pervasive.
It's a very good interaction pattern. It would only become more viable given increasing computing power and better understanding of context thanks to LLMs, and it would increase productivity and reduce boredom with minimal intrusiveness.
So why is nobody offering this feature in data-heavy UIs? (Except for Excel's column auto-complete, which is hit or miss but incredibly useful when it works.)
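A minimal sketch of the "detect the pattern from a few edits, offer to apply it to the whole column" interaction (a much-simplified cousin of Excel's Flash Fill; all function names here are hypothetical, and it handles only one transformation family, wrapping each value in a fixed prefix/suffix inferred from the user's examples):

```python
def infer_wrap(examples):
    """examples: (source, edited) pairs taken from the user's first edits."""
    candidates = []
    for src, out in examples:
        i = out.find(src)
        if i < 0:
            return None                        # not a wrap transformation
        candidates.append((out[:i], out[i + len(src):]))
    # offer a completion only if every example agrees on the same wrap
    return candidates[0] if len(set(candidates)) == 1 else None

def autocomplete(column, examples):
    wrap = infer_wrap(examples)
    if wrap is None:
        return None                            # stay quiet, don't intrude
    pre, suf = wrap
    return [pre + v + suf for v in column]

col = ["alice", "bob", "carol"]
print(autocomplete(col, [("alice", "<alice>"), ("bob", "<bob>")]))
# → ['<alice>', '<bob>', '<carol>']
```

The key design point is the last-line check in `infer_wrap`: the suggestion is only surfaced when every example agrees, which is what keeps the feature useful rather than intrusive.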
In 1989 I wrote a little shell with an embedded neural net that watched what you did and after a while prompted you for a likely next command. Just a toy really but it worked. I used to be into such things back then.
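The watch-and-prompt behavior described above can be sketched in a few lines; this toy uses bigram frequency counts rather than a neural net, and only prompts once a follow-up command has been seen often enough (the class and threshold are illustrative assumptions, not the original implementation):

```python
from collections import defaultdict, Counter

class CommandPredictor:
    """Watches a command stream and suggests a likely next command."""

    def __init__(self):
        self.bigrams = defaultdict(Counter)  # prev command -> next-command counts
        self.last = None

    def observe(self, cmd):
        if self.last is not None:
            self.bigrams[self.last][cmd] += 1
        self.last = cmd

    def suggest(self, min_count=2):
        # Only prompt once a follow-up has been seen often enough.
        if self.last not in self.bigrams:
            return None
        cmd, n = self.bigrams[self.last].most_common(1)[0]
        return cmd if n >= min_count else None

p = CommandPredictor()
for c in ["make", "ls", "make", "ls", "make"]:
    p.observe(c)
print(p.suggest())  # "ls" has followed "make" twice, so it is offered
# → ls
```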
I wonder how someone with your experience views the way modern AI technology is being developed. Fascination, frustration, or both?
It is worth mentioning Open Sesame!'s growth into a leader [0] in warfighter and human-centered intelligent systems.
[0] https://cra.com/company/
Everything old is new again: I came across a demo for Telescript [1] the other day that would not look out of place in a pitch deck today, save the references to AT&T. https://www.youtube.com/watch?v=wtrs3jtY96k
[1] http://www.datarover.com/Telescript/Documentation/TRM/chapte...