Background
Microsoft launched the idea of a Microsoft productivity score to help organizations understand the work patterns of their employees, as well as their influence. They then retreated after receiving feedback about user privacy concerns and about the difficulty of identifying influence.
Analysis
The positioning and buy-in create the perceived problems with data about employees in the enterprise, not the use of the data itself. If Microsoft had launched the idea of a Microsoft productivity score as an end-user tool for continuous improvement, they might have received a better reception. Anything marketed in a way that screams (or even whispers) Big Brother will likely meet push-back. Data that shares insights with individuals, and perhaps even makes non-intrusive suggestions, can be seen as useful. Microsoft’s productivity score was not a bad idea, but it was an idea that required more subtle marketing.
But perceptions of the misuse of personal data also arise from the broken social contracts with employees and partners. Organizations need to negotiate permission to learn from their employees, not establish black-and-white positions that ban them from data access, even as governments and other bodies seek to limit access to personal data. That people own their data cannot be disputed, but ownership within a business may not be absolute. Shared ownership suggests the need to find common ground on data use, and that means negotiating social contracts that recognize privacy, shared interest, and value.
In a world where fabricated evidence leads news cycles, evidence-based management needs to find a role. Is it better, for instance, to show a person that they cost the company a significant amount of money because they regularly cancel meetings at the last minute that their peers prepare for, or to collect opinions from those peers about the disruptions and leave it at that? It is possible for Microsoft 365 to estimate the amount of work a team puts into a canceled meeting. Would presenting an individual with evidence of the cost of erratic meeting management not be a valuable learning tool, and yes, an even more valuable course-correction input than hearsay from peers?
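To make that arithmetic concrete, here is a minimal sketch of the kind of estimate such a system could produce, assuming the calendar exposes attendee counts and scheduled durations. The prep-time ratio and hourly rate below are hypothetical placeholders for illustration, not figures from any Microsoft 365 telemetry.

```python
from dataclasses import dataclass

@dataclass
class CanceledMeeting:
    attendees: int            # invitees who accepted
    duration_hours: float     # scheduled length of the meeting
    prep_ratio: float = 0.5   # assumed prep time per attendee, as a fraction of duration

def cancellation_cost(meeting: CanceledMeeting, hourly_rate: float = 75.0) -> float:
    """Estimate the sunk cost of a late-canceled meeting.

    Counts the preparation time each attendee likely invested.
    The prep_ratio and hourly_rate are illustrative assumptions,
    not figures drawn from any real telemetry.
    """
    prep_hours = meeting.attendees * meeting.duration_hours * meeting.prep_ratio
    return prep_hours * hourly_rate

# A one-hour meeting with eight attendees, canceled at the last minute:
print(cancellation_cost(CanceledMeeting(attendees=8, duration_hours=1.0)))  # 300.0
```

Even a crude estimate like this turns "you cancel a lot of meetings" into a number an individual can reflect on.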
Useful intrusion is not only welcomed, but it may also prove valuable enough to invest in. As I type this, Grammarly watches my every keystroke in order to assert its mostly correct suggestions for my writing. And every week, I receive a report about how my writing is different, or better, or worse than that of other Grammarly subscribers. I pay for this intrusion. I willingly give away my data about my most precious core competency so that my readers may enjoy a less typo-ridden writing experience.
So remove management from the equation, and share with users the system’s perceptions of their productivity with apps and within human networks, and you likely deliver a tool that could be useful for self-growth. Even if people don’t explicitly do anything about what they learn, awareness of the information may drive subtle changes that improve how they work.
One way to repair the broken work agreements is to make this data part of the opt-in for work: if you want to work here, you agree to have the system track you and provide feedback. Second, at any point, people can opt in to grant a manager, a coach, or a mentor access to their data–and just as easily revoke it. Should they grant permission, that data can inform the advice they receive.
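A rough sketch of how such revocable, per-person grants might be modeled follows. Every name here is hypothetical; this is not an existing API, just one way to express the opt-in contract.

```python
from datetime import datetime, timezone

class DataAccessConsent:
    """Tracks which managers, coaches, or mentors a worker has granted
    access to their work data. Grants are revocable at any time."""

    def __init__(self, worker_id: str):
        self.worker_id = worker_id
        self._grants: dict[str, datetime] = {}  # grantee id -> time granted

    def grant(self, grantee_id: str) -> None:
        """The worker opts in to sharing data with this person."""
        self._grants[grantee_id] = datetime.now(timezone.utc)

    def revoke(self, grantee_id: str) -> None:
        """The worker withdraws access at any time."""
        self._grants.pop(grantee_id, None)

    def can_access(self, grantee_id: str) -> bool:
        return grantee_id in self._grants
```

The design point is that the worker, not the manager, holds the switch.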
Only by gathering this data and using it in practice can companies like Microsoft learn what works and what doesn’t, and how to build meaningful models beyond what they can imagine today. They will need to be transparent that their measurement systems are primitive and that they will get stuff wrong for a long time–but also that the learning journey they share will incrementally improve and continue to provide value. (And they will need to constantly listen to those on the shared journey to ensure that it does continue to provide value.)
What work scoring systems will get wrong for a long time
Any initial implementation of a work scoring system will get a lot wrong when first deployed. A lot. The learning curve may be steep, but that doesn’t mean the learning isn’t valuable.
Information and knowledge work is complex and collaborative. Focusing on the individual is likely insufficient. Scoring requires context. But without a focus on the individual, any aggregate analysis loses its ability to influence change, except in the abstract (hey, it looks like we have a few people doing this thing not as well as everybody else).
As I point out in the post How to Define Quality of Service (QoS) for Meetings, the fallback position, the one Microsoft took, and the one Cisco is also taking with their Webex metrics work, looks self-servingly at technology adoption and infrastructure availability. These are easy to measure, but they fall into the category of counting things. They may prove useful to the vendor during a renewal discussion, but they do very little to improve the non-obvious issues related to meeting quality or individual productivity.
Perhaps the biggest issue with measuring productivity remains the oversimplification of work models. Because people act as cushions and spacers for poor work design, it becomes nearly impossible for systems to actually understand anything but the most basic measurements of productivity.
Complication also comes from overlapping investments in collaboration technology. Microsoft’s ability to build models and report on impact, at any level, requires them to be the majority technology provider within the workspace. In mixed technology environments, the lack of integration that causes manual handoffs and duplicate postings will make creating a cohesive view of work more daunting. Over the long term, as meaningful models emerge, the underlying technology can be integrated across systems like any other interface…but as the learning takes place, disparate working environments will make modeling work more difficult, even impossible in the most fragmented cases.
Other factors include recognizing delegation and gaming the system.
Microsoft Productivity Score: Gaming the system
Any system of measurement can be gamed. People will learn what their managers think is important. They will then work to ensure that their work appears to align well with what the managers think is important. Even if it isn’t really important. That reality sits at the core of my book Management by Design, where I argue that all work must be aligned with strategy and that all work experiences should be designed for balance to avoid over-emphasis or disproportionate attention to particular measures.
To avoid people gaming the system, develop models that reflect the complexity of work. In sales, does counting closed deals in a quarter overemphasize closure at the expense of relationships? Does a more relationship-focused salesperson do better over the long term, with bigger, more lasting deals? What activities make up the sales cycle, and how is each activity measured? Then, ideally, some instrument can be inserted that allows technology to help monitor performance–informing a salesperson that he or she is spending too much time on research and not enough time sending out e-mails. Would giving them a personal view of how they manage their time compared to an average of their peers really violate their privacy, or anyone else’s? I doubt it–if the expectation is set that these kinds of tools are intended for personal growth.
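As a sketch of what such a personal view might compute, assuming hours per activity could be derived from calendars and sent mail; the activity names and figures below are invented for illustration.

```python
from statistics import mean

def time_allocation_gaps(person: dict[str, float],
                         peers: list[dict[str, float]]) -> dict[str, float]:
    """Compare one salesperson's weekly hours per activity against the
    peer average. Positive numbers mean more time spent than peers."""
    activities = set(person) | {a for p in peers for a in p}
    return {
        activity: person.get(activity, 0.0) - mean(p.get(activity, 0.0) for p in peers)
        for activity in activities
    }

me = {"research": 12.0, "email_outreach": 3.0, "calls": 6.0}
peers = [
    {"research": 6.0, "email_outreach": 8.0, "calls": 7.0},
    {"research": 5.0, "email_outreach": 9.0, "calls": 5.0},
]
for activity, gap in sorted(time_allocation_gaps(me, peers).items()):
    print(f"{activity}: {gap:+.1f} hours vs. peer average")
```

Shown only to the individual, a readout like "research: +6.5 hours vs. peer average" is a nudge, not surveillance.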
It is easy to game simple systems, much harder to game more complex ones. By modeling the complexity of the work environment and reflecting it back in meaningful ways, individuals will be less inclined to game the system, and less capable of doing so.
The best way to avoid system gaming is to put the information worker at the center of the system, so that any gaming they do clearly detracts from the value they get out of the experience.
People are clever, and if they think their compensation ties to a certain metric, they will play that metric. Simple productivity measurements integrated into management feedback will produce the same gaming behavior. The future of work remains a complex interplay of process, relationships, and tools–a system too complex to game except when poor managers overemphasize components. By offering windows into the complexity, collaboration vendors can offer useful insights that help people learn about how they work, improve their work product, and create better outcomes. But that means tackling the hard problems of modeling complex work, and you have to be in that game to make progress.
Note: I fall prey to system gaming myself as I attempt to turn my Yoast SEO light green. That sometimes means writing poorly as I squeeze keywords into places they don’t naturally fit.
Why Microsoft’s retreat from the individual Microsoft Productivity Score was wrong
Microsoft should have repositioned the Microsoft productivity score, not retreated from it (and they should have considered this before the launch). If they had brought out their productivity scores as personal learning tools, they could have continued learning about how to build better models, how to identify the gaming of measurements, and how to create meaningful context.
Too many analytics programs focus on technology buyers. Value propositions aim at renewals and air cover for those making large software and services investments. Productivity, serendipity, and innovation don’t arise from counting up-time, technology adoption, or engagement during meetings–they arise from inspired people working hard, building and leveraging relationships, and imagining new ideas. If software instruments can help individuals do those things better, and report on how well software facilitated, encouraged, and added value to those efforts, then the dialog about the balance between privacy and value creation is a worthy one to have. By retreating, Microsoft abdicates leadership in an area where very few companies have permission and credibility to learn at scale.
Software has a role beyond shaving seconds off the development of a PowerPoint presentation. As long as Microsoft sees software as just a tool, it will not elevate the possibilities for a deeper understanding. Even the discussions of ecosystems circle around technology integrations, not the social and economic relationships among those using the tools. Microsoft needs to step up to a leadership role that helps organizations, and those who work for them, understand what software can do to discover new levels of value based not on points in time, but across the large swaths of time and process through which work flows.
Microsoft will never stop thinking about the next competitive displacement sale, nurturing loyalty to gain renewals, or upselling on current accounts. It needs to consider the vast footprint it holds and the value of learning more from its customers than what they like or don’t about a user interface element, an API, or a product feature. Microsoft can and should learn about how people work at a very deep level. And it should provide value back to its customers by sharing that learning to maintain the credibility and permission it has fought hard to earn.
For more serious insights on other collaboration topics like the Microsoft Productivity Score, click here.
Cover photo by Andreas Klassen on Unsplash.