For telecenter M&E, what should be tracked and measured?

There is a series of interesting conversations occurring at the Telecenter Europe Summit (#TES10) around the importance of community technology programs to advancing the EU e-inclusion goals. There is also a sense that these centers need to do a better job of tracking and measuring their work, both to make their case to funders and to improve their programs by sharing lessons, making best practices visible, and so on.

The conversation interests me because of the challenge of identifying the factors that matter. (Especially given our a priori interest in the contribution of technology; everyone in the ICTD field is “invested” in its contribution.) This relates to causation. From Richard Taylor:

Most people, that is, think of the cause of some change as some one condition that is conspicuous, novel, or, most likely, within someone’s control. In the illustration we have been using, for example, the friction on the match would ordinarily be thought of as “the cause” of its igniting, without regard to its dryness, its chemical composition, and so on. But the reason for this, obviously, is that these other conditions are taken for granted. They are not mentioned, not because they are thought to have no causal connection with the match’s igniting, but because they are presupposed.

If standardized data are to be collected across highly varied sites, how can we be sure we’re capturing the “correct” variables? What is at risk of being taken for granted?

This problem is accentuated as the needs of the user populations in question grow; fewer needs means that less is taken for granted. Suppose the goal is employability by way of technology training. It would be wonderful to be able to assume the existence of a functioning labor market, available jobs, the “right kind” of jobs, employers that value technology skills, individual motivation, job search and placement infrastructure, etc. If you cannot assume these (and many other) conditions, then technology access and training alone are unlikely to “cause” employment.
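To put Taylor’s point in toy form (this is purely illustrative; the condition names are hypothetical, and I am not suggesting employment is literally a boolean conjunction): the outcome depends on many conditions holding at once, yet we habitually label only the conspicuous, fundable one as “the cause.”

```python
# Toy model of Taylor's point: an outcome as a conjunction of conditions.
# All names are hypothetical illustrations, not real program variables.
conditions = {
    "technology_training": True,       # the conspicuous, controllable "cause"
    "functioning_labor_market": True,  # usually presupposed...
    "jobs_available": True,
    "employers_value_ict_skills": True,
    "individual_motivation": True,
    "placement_infrastructure": True,
}

# Employment follows only if every condition holds; flip any single
# presupposed condition to False and training alone "causes" nothing.
employed = all(conditions.values())
print(employed)  # True here, but fragile to every taken-for-granted entry
```

Flip `functioning_labor_market` to False and `employed` becomes False even though the training “worked,” which is exactly the case a naive M&E system can mis-score.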

At TASCHA, this problem is particularly acute because we have been working with Microsoft Community Affairs in highly varied settings worldwide. Community Affairs grantmaking is impressive (imo) because they select excellent partners (on balance) that serve some of the most difficult populations in the world. (What combination of services is required for a survivor of human trafficking, or a laid-off miner in rural Romania, or an ex-guerrilla dealing with PTSD and the constant threat of violence in Colombia to find employment?) They have funded excellent organizations that are taking on hard cases. So how should the efficacy of employability programs be measured if “employment” is not actually achieved?

Akhtar has been blogging about these issues and offers this advice:

I want to close by saying that both of these programs represent one of our fundamental principles – stay local. I don’t mean to say there aren’t best practices that apply broadly or opportunities for scale at the regional or global level, but you will never be successful if programs aren’t appropriate for the specific community. Through our staff and partners around the world we are able to identify and support programs that remain true to this principle.

If the advice is to stay local, how can we capture general (global? regional?) indicators that do not misrepresent key local catalysts? When we think about these issues, what are we taking for granted?

How can we devise socio-technical M&E systems that either:

  1. Minimize the “taking for granted,” or
  2. Avoid penalizing organizations that contribute when the system takes their critical success factors for granted? (One sketch of what making these conditions explicit might look like follows below.)
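As a thought experiment (a sketch I am inventing here, not an instrument TASCHA or Microsoft uses, and every field name is hypothetical), one could imagine each standardized indicator being stored alongside the local preconditions it presupposes, with a flag for whether each was actually assessed. That would let the system separate “the program underperformed” from “the context never permitted success,” and make visible what was taken for granted:

```python
# Hypothetical sketch of an M&E record that makes presupposed local
# conditions explicit. Nothing here reflects an actual instrument.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Precondition:
    name: str
    holds: bool            # does this condition hold at this site?
    assessed: bool = True  # False = never checked, i.e., taken for granted


@dataclass
class IndicatorRecord:
    site: str
    indicator: str         # e.g., "trainees employed within 6 months"
    value: float
    preconditions: List[Precondition] = field(default_factory=list)

    def taken_for_granted(self) -> List[str]:
        """Conditions this record silently presupposes."""
        return [p.name for p in self.preconditions if not p.assessed]

    def blocked_by_context(self) -> List[str]:
        """Assessed conditions that failed locally; a low indicator value
        here says more about the context than about the program."""
        return [p.name for p in self.preconditions if p.assessed and not p.holds]


# One standardized indicator at one highly constrained site.
record = IndicatorRecord(
    site="rural site",
    indicator="trainees employed within 6 months",
    value=0.12,
    preconditions=[
        Precondition("functioning labor market", holds=False),
        Precondition("employers value ICT skills", holds=True),
        # holds is a placeholder here; the point is it was never assessed
        Precondition("job placement infrastructure", holds=False, assessed=False),
    ],
)

print(record.blocked_by_context())  # ['functioning labor market']
print(record.taken_for_granted())   # ['job placement infrastructure']
```

On this sketch, a funder comparing `value` across sites would at least see which sites were blocked by context, which is one way to avoid penalizing the organizations taking on the hardest cases.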

People love to “count” things. (I’m currently at a coffee shop watching a child count the number of customers. He’s having particular difficulty as people come and go. He just ordered one person not to leave because he’s “still counting.” Awesome.) But how can we make sure that our love of counting does not misrepresent the set of issues that wrap around and complement technology training?