Meta records employee keystrokes for AI training
Internal tool captures mouse and typing data in selected apps, workplace telemetry becomes model fuel
Image Credits: Kyle Grillot/Bloomberg / Getty Images
Meta is rolling out an internal tool that captures employees’ mouse movements and keystrokes in certain applications to train its AI models, Reuters first reported, as summarized by TechCrunch. The company says the data will be used to improve “agents” that can operate computers the way people do—clicking buttons, navigating menus, and completing everyday tasks.
The immediate novelty is not that Meta wants more training data, but where it plans to get it. Consumer text, images, and public web pages are increasingly contested—by lawsuits, licensing demands, and platform restrictions. Employee behavior inside corporate tools is different: it is already logged in many places, it is contractually easier to claim, and it comes with built-in labels such as job role, software stack, and task context. Meta’s pitch is that real examples of how people use software will make its models better at using software.
But collecting “inputs” at the level of keystrokes and cursor paths collapses a boundary that many companies have tried to keep intact: the difference between building products and monitoring workers. Even if Meta limits capture to “certain applications” and says it has safeguards to protect sensitive content, the system still requires persistent instrumentation on employee machines. Once that instrumentation exists, the question becomes less about whether it can be repurposed and more about who has the authority to decide when.
There is also a structural reason this approach is attractive. Training data is a recurring cost, while employee activity is a continuous stream. A company that can turn routine work into model input reduces its dependence on external datasets and on negotiations with publishers, platforms, or data brokers. The tradeoff is internal trust: staff must assume that what they type, how they hesitate, and what they click will be interpreted only as “training signals” and not as performance analytics.
Meta told TechCrunch that the captured data is not used for any other purpose. That assurance is hard to audit from the outside, and difficult for employees to verify from the inside.
For now, the fact pattern is straightforward: Meta wants its AI agents to learn how people operate computers, and it is choosing to learn from its own workforce.