Executive Summary

Samasource’s mission is to increase income for poor and unemployed youth in developing countries. The theory behind Samasource’s program is that employers in low-income countries have trouble discerning the potential and latent abilities of young job seekers, who in turn are not able to acquire marketable skills and job experience.

To address this challenge, Samasource trains young workers in hard and soft skills and employs workers (referred to as “agents”) who meet certain impact criteria to conduct digital microwork. Samasource partners with business process outsourcing delivery centers and operates its own delivery center to employ these agents. Samasource secures contracts with firms in developed countries, who pay Samasource for the digital microwork. Samasource, a nonprofit, also receives charitable contributions to support its operations and provide training and additional services for agents.

The objective of this program is to provide young job seekers who do not have the economic, educational or social background to secure jobs with the marketable skills and job experience necessary to do so.

Samasource monitors delivery through process metrics, most importantly the total number of active agents at delivery centers. Samasource monitoring data show that in the first three quarters of 2016, there were 955 active agents on average. Samasource’s primary outcome is post-Samasource total annual earnings. This primary outcome could lead to additional positive downstream outcomes, including increased consumption, improved health and additional education.

ImpactMatters did not consider the wages paid during employment at Samasource as an outcome, as these wages are not considerably different from fair market wages paid in exchange for services provided by the agents. Wages are considered compensation for agents’ productive time and work output, and therefore not analyzed as impact in this audit. This differs from Samasource’s own analysis of its impact, which does encompass wages earned at Samasource.

Samasource operates delivery centers or partners with delivery centers in Haiti, India, Kenya and Uganda. In addition, Samasource operates a program, Samaschool USA, in the United States, which aims to train workers to participate in the gig economy.

Samasource is at the validation stage. Samasource was founded in 2008 and has a proven business model and enterprise-wide systems for business operations. Although Samasource has successfully replicated its model, it continues to make fundamental changes to that model.

Samasource has conducted an internal evaluation of its impact, but this evaluation produced only low-quality evidence for Samasource’s impact. The internal evaluation compared the wages of agents upon entry to the program with wages three years after leaving the program. It found a 271% increase ($3,540 vs. $954) in annual income after three years for Samasource agents, compared to an expected 33% increase ($1,265 vs. $954) if those agents had not participated in Samasource. However, there are two significant limitations to this analysis. First, of the full sample selected for the analysis, data were collected on only 32%. Attrition from the sample is likely correlated with outcomes, which would bias the observed finding upward. Second, comparing agents before and after the program provides a very weak estimate of counterfactual impact. A number of factors unrelated to the Samasource program – such as increased experience or selection for higher-aptitude workers – could be responsible for the observed wage changes.

Samasource is in the planning stages of a randomized controlled trial with a third-party evaluator, Innovations for Poverty Action.1 The trial is expected to conclude in December 2018 and we anticipate it will be of high quality. One rigorous, counterfactual evaluation of a similar program has been conducted in Liberia. This study is moderately applicable to Samasource and shows large increases in wages. We will discuss potentially important differences between this study and the Samasource program below. We will also discuss four additional studies that have low applicability to Samasource.

Using baseline and endline survey data from Samasource, we predict the Samasource program increases earnings by $1,150 after three years. To correct for some of the issues with Samasource’s survey data, we conservatively assume that participants who responded to the baseline but not the endline survey did not experience any change in earnings. The available empirical evidence from Samasource only extends for three years, and so that is the period of benefits we use. Samasource spends $1,800 in total cost per participant in the program, for a total Cost of Impact of $0.64.
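
To make the mechanics of that conservative correction concrete, the sketch below applies the same idea to a few hypothetical survey records; the figures, field names and function are illustrative only and do not reproduce ImpactMatters’ actual calculation from Samasource’s survey microdata.

    # Illustrative sketch (hypothetical data): participants who answered the baseline
    # but not the endline survey are assumed to have zero change in earnings,
    # which pulls the estimated gain downward.

    def conservative_earnings_gain(records):
        """Mean earnings gain, counting endline non-respondents as zero change.
        Each record has a baseline earnings figure and an endline figure or None."""
        gains = []
        for r in records:
            if r["endline"] is None:           # baseline-only respondent
                gains.append(0.0)              # conservative assumption: no change
            else:
                gains.append(r["endline"] - r["baseline"])
        return sum(gains) / len(gains)

    # Hypothetical example: two endline respondents with gains, one non-respondent.
    sample = [
        {"baseline": 1000, "endline": 3400},
        {"baseline": 1000, "endline": 2100},
        {"baseline": 1000, "endline": None},   # could not be re-contacted
    ]
    print(conservative_earnings_gain(sample))  # -> about 1166.67 (illustrative only)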

Samasource collects high-quality monitoring data, which generates useful knowledge for Samasource itself and external stakeholders. The principal monitoring systems reviewed in this impact audit are targeting systems, tracking of Impact Sourcing activities at delivery centers and of Samasource Training activities at training centers, pre- and post-program surveys and payroll tracking. This audit assessed how well Samasource collects five types of monitoring data: what the nonprofit does (activities), who it reaches (targeting), how well those in the program participate (engagement), what participants and other stakeholders think about the program (feedback) and the results of the intervention for participants and other beneficiaries (outcomes).

Activity, targeting and engagement data are collected with a credible methodology, meaning the information is trustworthy to personnel and outsiders alike. Samasource could improve the credibility of its feedback data by systematizing its collection, for example by ensuring workers have periodic and inclusive opportunities to give feedback to their managers, and the credibility of its outcomes data by improving its response rate. All data are actionable, meaning the data are circulated to appropriate personnel throughout the organization in a timely manner, enabling managers and executives to understand and respond to current problems. All data collected are responsible, and all but the outcomes data are transportable in that they are well linked to Samasource’s theory of change and are shared appropriately and transparently. Samasource could improve the transportability of its outcomes data by collecting only those data that actually inform management decisions. Overall, however, Samasource has excellent monitoring systems, and the issues identified here are minor.

Samasource iterates its model systematically and periodically on the basis of high-quality data. This means that iterations are subjected to testing and are adopted based on either a counterfactual test of impact, strong effect sizes, or both. Samasource’s iterations are also adopted systematically, with recognizable components of the Plan-Do-Check-Act cycle. Lastly, Samasource iterates periodically in that it sources, considers and tests iterations at frequent regular intervals. An important feature of Samasource’s Learning and Iteration is that it has demonstrated willingness to make difficult strategic decisions, such as closing down an unsuccessful program in order to substantially reimagine it.


1 ImpactMatters is currently being incubated by Innovations for Poverty Action.

Nonprofit Comment

In this section, the nonprofit is given the opportunity to respond to the impact audit. This statement has not been edited by ImpactMatters.

Samasource would like to thank the ImpactMatters team for their rigorous research and analysis. This audit report is the most objective and exhaustive assessment of our organization's mission, operations and commitment to outcomes in our history.

As we are committed to the highest standards of impact evaluation, we are honored to receive the highest possible three star ratings in the Quality of Monitoring Systems and Learning & Iterations categories with a total achievement of eight out of nine stars.

We also understand to achieve the top three star ratings in the Quality of Impact Evidence category we require further objective evidence.  We’ve been aware of this shortcoming for several years, and can now report that we have initiated a Randomized Control Trial (in affiliation with Innovations for Poverty Action and Massachusetts Institute for Technology) in January of 2017. We look forward to sharing the results throughout the study to enhance the quality of our objective impact evidence, as well as contribute to the body of academic development literature around effective and permanent interventions.

We are also grateful for ImpactMatters’ rigorous and exhaustive analysis to our Cost of Impact, particularly their recognition that our social enterprise model is unique towards driving the majority of its long term financial viability through earned revenue which significantly magnifies the impact of donor contributions.

As a result of our progress in our model we are thrilled to report that we’ve achieved our ultimate goal (earlier than planned) of full operational sustainability for impact sourcing activities through earned revenue in 2016. As a result, our Donor’s Cost of Net Impact (DCNI) measured by ImpactMatters from 2013-2015 at 0.63 has now moved to well above 100x in 2016.  We are now in a position where all future donor contributions both significantly enhance each beneficiary's life in terms of additional life skills training, livelihood benefits, current and future income, and also allow our organization to tackle bringing our intervention into the most remote and impoverished communities around the world. 

Our organization has found immense value in participating in the audit process and the report findings. We are especially humbled to find our strategic priorities align closely with the report recommendations and look forward to a continued partnership with the ImpactMatters team to monitor our progress in our mission forward.

Nonprofit Program Description

In this section, ImpactMatters summarizes the essence of the nonprofit’s mission and constructs a theory of change for the nonprofit that describes the problem, the nonprofit’s intervention and the appropriate process metrics and outcome metrics for tracking success.


Mission

To increase income for poor and unemployed youth in developing countries.

Theory of Change


Samasource channels small digital tasks, called microwork, to individuals in developing countries. As a business process outsourcer, Samasource is competing with for-profit firms that provide similar services, often relying on workers in developing countries. For donors, Samasource is a sensible investment of philanthropic dollars only if it generates greater impact than its for-profit competitors.

In low-income countries, labor markets are often characterized by mass youth unemployment and poor education. In these labor markets, employers face an information problem, as they have trouble discerning the potential of applicants on the basis of credentials and interviews alone.1 The Samasource theory of change rests on this assumption that human capital markets are incomplete, leaving impoverished individuals without the necessary skills to acquire and maintain employment.

Samasource aims to address this problem by providing low-income individuals in developing countries with training and employment conducting microtasks. Samasource aims to reduce transaction costs and information barriers in labor markets that otherwise would prevent such individuals from being employed to provide services to firms in developed countries. It is important to consider that the subsidy to target such individuals comes from three sources: (a) direct subsidy from Samasource donors, (b) an increased market for microtasking (which also implies more efficient markets in other sectors entirely, i.e., the firms hiring Samasource are able to lower prices or offer products and services not otherwise possible) and (c) reduced employment for other microtask employees at competitors of Samasource.

The benefits to Samasource workers (referred to as “agents”) include both short-run and long-run benefits. The short-run benefit is the immediate increase in income and thus consumption. In this impact audit, we do not count this benefit as we consider the wages paid by Samasource to be neither a transfer nor a subsidy to that agent. They are simply wages paid to the agent for work she or he provided. This is discussed further in Outcome Metrics.

The long-run impact, which is counted in this impact audit, comes primarily through one channel: by addressing the underlying information failure in labor market skills (i.e. by providing training and skills development), former agents are more able to acquire positions elsewhere and therefore earn greater wages, leading to downstream benefits such as increased consumption and improved health and education.

Beyond improving job opportunities for the average agent in developing countries, Samasource explicitly targets those who are the poorest, and therefore likely the least employable. Underemployment is not evenly distributed through the population. The poor have fewer opportunities to buy access to formal sector jobs and job training for the formal sector. Those with weaker network ties to firms that offer good jobs to new job seekers will also systematically discover fewer opportunities and get fewer offers of employment.

Particularly among those with sparse connections and limited social capital, the high costs of a job search result in longer job searches, longer periods of underemployment and a rational decision to accept below market wages.


Samasource provides poor unemployed youth with formal sector, digital microwork jobs. Workers receive support services, training, experience and pay that meets a wage standard.


Samasource offers work and job training to individuals on the basis of demographics, wages, education and employment status. Samasource surveys prospective agents to learn whether they meet targeting criteria. Samasource hires both impact and non-impact agents, where the former are those who fully satisfy targeting criteria and the latter are those who are above targeting cut-offs. However, Samasource sets explicit targets for the percentage of impact agents hired.

Applicants are assigned an impact score on the basis of their responses to a survey questionnaire that evaluates socioeconomic status. If the impact score is above a threshold, the applicant is designated an impact agent; borderline cases are subjected to manual review.

The impact criteria that make up the score and the exclusion criteria vary somewhat by location, but generally are designed to capture a similar population in terms of need and access to opportunity. Samasource constructs a score based on prior weekly earnings, whether the agent was unemployed or employed in formal or informal work, and the education background of the agent. In Kenya, geographic targeting (meaning whether an applicant’s neighborhood of residence within Nairobi is an informal settlement, low-income, middle-income or high-income) is a component of impact scoring.

In addition, Samasource uses specific exclusion criteria, segmented by education level. For those who are currently attending school, if their home location is not in a designated area of need, they are designated non-impact.

For those with a high school certificate, Samasource designates them as non-impact if they are in the formal sector and above the benchmark weekly pay. For those with a college or master’s degree, Samasource designates them as non-impact if they are in the formal sector or above the benchmark weekly pay.

In addition, Samasource looks at gender for all hires, with an objective of providing women with an equal chance of hire.
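
As a rough illustration of how the scoring and exclusion rules described above could fit together, the sketch below encodes a toy version of the designation logic; the thresholds, weights and field names are invented for this example and are not Samasource’s actual scoring model.

    # Toy impact-agent designation (invented thresholds and weights, not Samasource's model).

    BENCHMARK_WEEKLY_PAY = 30.0    # hypothetical benchmark pay cutoff
    SCORE_THRESHOLD = 60           # hypothetical impact-score cutoff
    REVIEW_BAND = 10               # borderline scores go to manual review

    def impact_score(applicant):
        """Score built from the criteria described above: prior weekly earnings,
        employment status, education and, in Kenya, neighborhood of residence."""
        score = 0
        score += 40 if applicant["weekly_pay"] < BENCHMARK_WEEKLY_PAY else 0
        score += 30 if applicant["employment"] in ("unemployed", "informal") else 0
        score += 20 if applicant["education"] in ("none", "high_school") else 0
        if applicant.get("country") == "Kenya":
            score += 10 if applicant.get("neighborhood") in ("informal", "low_income") else 0
        return score

    def designate(applicant):
        # Exclusion rules, segmented by education, are applied before scoring.
        formal = applicant["employment"] == "formal"
        above_pay = applicant["weekly_pay"] > BENCHMARK_WEEKLY_PAY
        if applicant.get("in_school") and not applicant.get("in_area_of_need"):
            return "non-impact"
        if applicant["education"] == "high_school" and formal and above_pay:
            return "non-impact"
        if applicant["education"] in ("college", "masters") and (formal or above_pay):
            return "non-impact"
        score = impact_score(applicant)
        if abs(score - SCORE_THRESHOLD) <= REVIEW_BAND:
            return "manual review"
        return "impact" if score > SCORE_THRESHOLD else "non-impact"

    # Hypothetical applicant: informally employed, low pay, Nairobi informal settlement.
    applicant = {"weekly_pay": 12.0, "employment": "informal", "education": "high_school",
                 "country": "Kenya", "neighborhood": "informal"}
    print(designate(applicant))  # -> "impact" under these toy weights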

Partner delivery centers provide services with specific ratios of impact agents to non-impact agents on the project according to service-level agreements.

Impact Sourcing

Samasource sources work from leading global companies to be conducted in its delivery centers. Projects are centrally managed by Samasource and broken into microtasks that can be completed in parallel, and then allocated to delivery centers through SamaHub. Samasource contracts with delivery centers to hire a proportion of agents that match Samasource targeting criteria, who are then designated impact agents. Impact agents acquire skills through training and on-the-job learning. In addition to contracting with partner delivery centers, Samasource also operates its own delivery center in Nairobi, Kenya, the Samasource Delivery Center (SamaDC).

Samasource's main activities in Impact Sourcing are: (1) partnering with delivery centers; (2) targeting agents; (3) hiring agents; (4) basic training, project-based training and other ongoing training for agents; (5) winning client contracts, scoping projects, breaking down workflows, coordinating with DCs; (6) paying agents a living wage (and benefits such as paid time off at SamaDC and select partner delivery centers); and (7) “impact programming” and other additional support services.

Samasource Training

Samasource operates a training program for prospective agents, offering three routes to employment. First, Samasource hires graduates of its trainings directly in delivery centers. Second, graduates are given preferential opportunities to be hired by partner delivery centers. Third, graduates may use online courses and boot camps to become self-employed digital microworkers.

Samasource's main activities in Samasource Training are: (8) partnering with training centers; (9) targeting trainees; (10) enrolling trainees; (11) providing ten-day intensive training; and (12) placing graduates into Samasource or with employment partners. In the future, Samasource will also provide support for those seeking employment in online marketplaces.


Samasource invests donor resources in the development of information systems, infrastructure, oversight and client relationships. Samasource has skilled professionals and proprietary SamaHub information technology that facilitate collaboration with leading global companies. The digital microwork provided by delivery centers relies on repetitive human-intelligence tasks that often work in tandem with artificial intelligence and machine learning. A prototypical example of digital microwork is to have people code files with tags and other descriptors that will be used to train computer algorithms. Samasource works with the client to prepare large data files for distributed, parallel, human coding; to manage the project; and to synthesize the results.
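
As a stylized illustration of that workflow, the sketch below splits a client’s file list into small tagging tasks and then reassembles the results; the data structures are hypothetical and are not the SamaHub data model.

    # Illustrative microwork pipeline (hypothetical structures, not the SamaHub system).

    def split_into_microtasks(client_files, batch_size=50):
        """Break a large client job into small tagging tasks agents can work in parallel."""
        tasks = []
        for i in range(0, len(client_files), batch_size):
            tasks.append({"task_id": len(tasks), "files": client_files[i:i + batch_size]})
        return tasks

    def synthesize(results):
        """Merge per-task annotations back into a single deliverable for the client."""
        merged = {}
        for task_result in results:
            merged.update(task_result["annotations"])   # file name -> list of tags
        return merged

    # Hypothetical usage: 120 image files split into three tasks and tagged by agents.
    files = [f"img_{n}.jpg" for n in range(120)]
    tasks = split_into_microtasks(files)
    results = [{"task_id": t["task_id"],
                "annotations": {f: ["example_tag"] for f in t["files"]}} for t in tasks]
    deliverable = synthesize(results)
    print(len(tasks), len(deliverable))  # -> 3 120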

Samasource deploys global information technology networks to facilitate collaboration between executives, sales teams, project managers and the delivery centers. Samasource aims to increase the number of delivery centers under its direct control in order to demonstrate best-in-class impact and operational performance, while continuing to scale through partnerships with third-party delivery centers. Samasource invests resources in the facilities and personnel of SamaDC, currently the only Samasource-owned and operated delivery center.

Samasource also invests funds in temporary support services to agents, such as bicycles for Samasource agents to lower the cost of commuting to work and a full-time case worker to assist agents with social issues exacerbated by poverty. Samasource provides agents with impact programming: training and mentorship sessions designed to improve financial literacy, nutrition, health, leadership and career development.


In order for Samasource’s service model to be effective, a number of assumptions must be met. Assumptions describe inputs provided to the intervention by stakeholders other than the project’s own personnel, funds and operations. Any organization rests on a nearly endless number of assumptions about the quality of its employees and operations; here we focus specifically on inputs that other stakeholders provide to the project.

  1. Sufficient training: Samasource typically requires agents to be ready to work at the end of a two-week training program. This entails targeting a population with sufficient literacy and numeracy to complete a rapid training in computer literacy, teamwork and accountability. This intervention cannot be extended to populations with weak English, poor literacy or poor numeracy. The same principle applies to Samasource Training trainees, who may not reap adequate benefits from the training course without basic literacy and numeracy skills.
  2. Willingness to hire impact agents: Samasource assumes that clients are willing to outsource work to microworkers from a poorer background or with less education than the microworkers they would typically be outsourcing to on the open market.
  3. Partner delivery center and training center capacity: Samasource contracts with partner delivery centers that conduct tasks using Samasource’s project managers and task systems. This model rests on the assumption that partner delivery centers are capable of delivering tasks on deadline and to Samasource’s quality standards. It also assumes that Samasource can achieve its impact targets through the delivery centers, which likely have incentive to not meet Samasource criteria for employing impact agents or paying wages. There are similar concerns for partner training centers that may not adhere to targeting requirements.
  4. Connectivity and infrastructure: Samasource relies on global connectivity and infrastructure to permit the continued delivery of business process outsourcing (BPO) services to clients around the world from offshore delivery centers.


A number of risks could potentially undermine the impact of Samasource’s program, even if Samasource is successful in providing the promised services to participants.

  1. Business risk. Samasource may not be able to operate sustainably if market conditions in the BPO industry change. Prices for its services could change, or advances in the state of the art could make Samasource’s digital task management approach obsolete.
  2. Labor market risk. Samasource may find that the demand for trained digital microtask workers suffers or that the appeal of digital microtask jobs for young workers declines.
  3. Nontransferable skills. Local opportunities may at some point in the future require different skills of applicants. The sectors where 90% of Samasource agents go on to employment are information technology, administration, customer service, finance and accounting, human resources, sales and marketing and entrepreneurship.2

  4. Reputational risk. Presently, Samasource alumni have high rates of employment and wage growth upon graduation. If, at some point in the future, key employers have bad experiences with Samasource alumni, that could reduce or eliminate one of the central benefits of Samasource, namely to provide better information about job seekers to employers.

Process Metrics

Samasource tracks the participation rates in its program through the following primary metrics:

  • Total employment in Samasource delivery centers
  • Total new hires in Samasource delivery centers
  • Average wages paid to Samasource agents
  • Total trainees that complete Samasource Training

Outcome Metrics

Primary: Annual total earnings after Samasource

Secondary: Rate of full-time employment after Samasource; rate of a combination of part-time employment and further education; and rate of full-time education.

Samasource’s mission is to increase incomes for poor and unemployed youth in developing countries. It does so by providing young workers with experience and marketable skills that then signal their quality to future employers. The success of the Samasource program depends on how alumni fare in the job market. The annual total earnings and rate of employment among all alumni are the best indicators of how well Samasource alumni fare in the job market.

Notably absent from this list of outcomes is the wages provided to agents for Samasource work. Since that work occurs in the course of employment at a delivery center, the wages paid at Samasource are neither a transfer nor a subsidy. They are simply wages paid to agents for work provided. We find no evidence that Samasource wages are substantially in excess of global rates for digital microwork or fair wages in the countries where Samasource operates.

Program Details


Samasource conducts impact sourcing in Haiti, India, Kenya and Uganda. The Samasource approach to impact sourcing is to design and manage projects for client companies, by organizing digital tasks for remote agents.3 Clients typically have headquarters located in the United States. Remote agents are located in offshore delivery centers (DCs) that, until recently, were not directly operated by Samasource. Samasource now outsources through its own DC, SamaDC, as well as partner DCs. The agents in those delivery centers are the beneficiaries.

Figure 1. Locations of current Samasource Delivery Centers



Samasource is at the validation stage. It has a proven business model and integrates operations across several emerging markets using enterprise-wide systems for business operations. Samasource conducts extensive monitoring. Samasource continues to evaluate major features of its service model, such as the service-level agreements with partner delivery centers and the components of the training program. Although it has successfully replicated its model across several countries, it continues to make fundamental, operational changes to its intervention, such as the direct operation of delivery centers to meet its targeting criteria.

Scale and Age

The primary service delivery of Samasource is the digital microwork program, which has employed 8,127 agents since Samasource’s founding in 2008. Samasource operates a delivery center of its own that has hired more than 700 agents.

Hiring slowed dramatically in 2015. The slowdown is likely a result of the decision to terminate partnerships with a number of delivery centers, which did not share Samasource’s focus on impact agents. Samasource opened its own delivery center in 2015, which likely led to the high number of new agents in 2016 and the lower number of exiting agents.

Impact agents (those that meet Samasource’s criteria for impact) work alongside other agents in the delivery centers, though are generally seated in designated project areas within the delivery center for effective project management.

Figure 2. New Agents and Exiting Agents Annually


* Based on Quarterly Tenure Tracking. Extrapolated from Q2-Q4 pro rata.
** Based on Quarterly Tenure Tracking.
*** Based on Quarterly Tenure Tracking. Extrapolated from Q1-Q3 pro rata.

Scaling Strategy


Samahope

Samahope was a crowdfunding platform that directly connected individual small donors to doctors treating birth injuries, birth defects, burns, blindness, and trauma in developing countries. The program also funded health infrastructure projects. It was primarily targeted at addressing the health needs of women and children.4 Samahope was spun off in 2016. For more information on the decision to spin off Samahope, see Learning and Iteration.

Samaschool USA

Samaschool USA trains workers in the United States to become self-employed using internet-based platforms, such as microtask marketplaces and I.T.-enabled services (ITES) apps like Uber and Lyft. Samaschool USA underwent a substantial iteration in 2016, described further in Learning and Iteration.

How Donations Are Used

Samasource is currently directing donations toward impact programming (also known as worker programs) for Impact Sourcing agents and toward the Samasource Training program.

Samasource aims to expand Samasource Training by providing wrap-around services to trainees and graduates, including travel stipends for job interviews, childcare and continued mentorship while graduates transition to seeking employment. Donations will also go toward increasing Samasource Training’s reach to an additional 4,000 women and youth over the next three years. As the training program scales up, Samasource will be investing resources into partnerships with local community organizations and hiring partners, learning technology and web-based teaching tools, and an enhanced curriculum that serves the needs of other industries and job functions.

Quality of Impact Evidence


Samasource is operating at the validation stage. At this stage, Samasource is assessed based on what internal evidence it is producing on its own program and what evidence from elsewhere supports the impact of the program. Samasource has collected data that show participant wages are higher among individuals several years out from its program compared to when they started the program. However, Samasource’s pre-post data have a high risk of bias due to poor contact and response rates for the Post Samasource Survey. Moreover, factors other than Samasource’s intervention itself could be driving the observed wage increase, such as the rise in wages many young workers see in their first years in the labor force, and the high likelihood that those who pass Samasource’s recruitment process have characteristics that make them likely to succeed.

There is some applicable evidence of impact from 12 studies of 13 technical and vocational education and training (TVET) programs. Five studies were randomized controlled trials and seven were quasi-experimental designs with either matching or statistical controls. The program most applicable to Samasource is EPAG, a TVET intervention targeted at female urban youth in Liberia. Adoho et al.’s randomized experiment showed EPAG led to an 80% increase in earnings and a 47% increase in employment, both significant at the 0.1% level. However, the programs evaluated in the 12 studies considered differ substantially from Samasource’s program and provide only medium applicability evidence.

Samasource is planning to conduct a randomized controlled trial of its program. If funding is committed to this study, Samasource’s rating for Quality of Impact Evidence would likely rise at the validation stage.

Table 1. Findings on Quality of Impact Evidence

Evidence Source            Finding
Internal Evaluation        Low Quality
Independent Validation     In Progress
Evidence from Elsewhere    Medium Applicability

Internal Evaluation Low Quality

Samasource’s evidence from its own program is of low quality. It estimates the increases in Samasource alumni’s wages using reflexive comparison, meaning a comparison of beneficiaries’ wages before and after participation in the program. Samasource internal personnel conduct this analysis. The sampling strategy is a complete sample of agents upon entry into the program, and a random sample of alumni at annual intervals after they leave the program. The evaluation describes the outcomes of only delivery center agents and not trainees from the related Samasource Training course. Contact rates are less than 50% and likely positively correlated with outcomes in the Post Samasource Survey.

Targeting Effectiveness

Samasource agents are targeted based on their income, employment status, educational background and, in Kenya, whether they live in a designated area of need. Samasource also aims to maintain gender parity in hiring. Over the past year, 77% to 100% of new agents at various delivery centers had pre-Samasource incomes below fair wage levels (weighted average: 78%); 67% to 100% were unemployed or underemployed (weighted average: 71%); 28% to 85% had little formal computer experience (weighted average: 57%); and 31% to 60% were female (weighted average: 46%). Based on these data, Samasource is effectively identifying individuals who match its target population.

Samasource is also effectively identifying trainees that match its Samasource Training target population. Tracking data from 2016 show that 76% of 1,756 applicants were designated as belonging in Samasource Training’s primary target group, 23% were in the secondary target group, and less than 2% were either out-of-target/high-risk or not eligible. Preliminary admittance data indicate Samasource does indeed strictly enforce in practice its protocols of not admitting ineligible applicants and only admitting applicants in the out-of-target/high-risk and secondary target groups contingent on instructor approval.

One minor issue that multiple staff members have observed is instances of applicants gaming the screening criteria and misrepresenting their eligibility for the Samasource Training program. Staff also suspect Samasource Training trainees are using leaked interview answers to raise their chances of employment at SamaDC. However, managers do not believe this is a great threat to targeting effectiveness because Samasource works in highly homogeneous communities where it would be unlikely to find people from upper-middle-class backgrounds.

Activity Take-Up and Engagement

Samasource tracks the number of total active agents and number of new agents in delivery centers each quarter. The average annual number of active agents was relatively steady from 2012 to 2015 and rose in 2016: 797 active agents in 2012, 795 in 2013, 730 in 2014, 659 in 2015 and 955 for the first three quarters of 2016. The average annual tenure of active agents has also risen, from 7.1 months in 2012 to 11.7 in 2015. However, data on incoming agents indicate the increase in tenure has not come at the expense of new agents entering the program. Overall, it is evident there is strong demand from participants for the program.

Impact programming at delivery centers is also in high demand, with 237 agents attending a three-day financial literacy training at a delivery center in India and full attendance at substance abuse and sexual harassment awareness sessions in Kenya. Samasource has documented a few poorly attended sessions and plans to take action by generating interest at delivery centers in upcoming programs and scheduling sessions more thoughtfully.

Samasource tracks agent performance on SamaHub and has layers of supervisory staff (Team Leads, Quality Analysts and Quality Assurance Managers) to ensure agents are engaging satisfactorily with the microtasks they have been assigned. Samasource is answerable to corporate clients for the quality of agents’ output, and all parties understand that non-performance is grounds for agent termination. Samasource therefore places high priority on agent performance as a component of participant engagement.

Samasource Training enrollment has been steady in the first three quarters of 2016, averaging 219 new trainees each quarter, with 935 forecast for the year. Trainees have solid graduation rates each quarter, averaging 88% in 2015 and forecast to reach or exceed 95% by the end of 2016.


Samasource’s pre-post evaluation indicates Post Samasource Survey respondents have impressive wage gains one to four years after their Samasource tenure.


In an impact benchmarking concept note released in December 2015, Samasource reports a 42% increase in monthly incomes over 5 months and a 184% increase over four years.5 In Samasource’s most recent (2015) published annual report, Samasource reports an 80% increase ($1,714 vs. $954) in annual income after one year for Samasource agents, compared to an expected 15% ($1,098 vs. $954) increase for non-Samasource workers, and a 271% increase ($3,540 vs. $954) in annual income after three years for Samasource agents, compared to an expected 33% ($1,265 vs. $954) increase for non-Samasource workers.5

These results are derived from the reflexive comparison of survey data collected from Samasource agents at baseline, midline and post-Samasource.6 For the Post Samasource Surveys, alumni are contacted by phone at annual intervals following program completion and self-report their current employment status, whether they are pursuing further education, and current earnings. In addition, working alumni report their sector (formal versus informal employment) and industry of employment. Among 2015 Post Samasource Survey respondents, 52% were working, 24% were both working and pursuing further education and 8% were only pursuing further education at the time they were contacted. Just 16% of respondents were both not working and not pursuing further education at the time of the survey.


While the contact rates and completion rates for surveys of current delivery center agents are nearly 100%, the contact and completion rates for the Post Samasource Survey are far lower and therefore raise concerns about risk of bias. The Post Samasource Survey has a completion rate of 80% among those who can be successfully contacted. Just 40% of Samasource alumni, however, can be successfully contacted for the survey. Of the total number of individuals sampled, 32% are both successfully contacted via phone and agree to complete the survey and 8% are successfully contacted but refuse to complete the survey. The unsuccessful contact (non-contact) rate is largely determined by the accuracy of the respondent’s phone number in Samasource records and the stability of the respondent’s phone number over the intervening years. Non-contact is likely to be correlated with the primary outcomes of interest: wages and employment. There is a clear risk that respondents who can be reached are more likely to be employed and earning higher wages than non-contacts. This is further discussed in Quality of Monitoring Systems.

Even if contact rates and response rates were high, reliance on pre-post data in a complex context like Samasource’s has several fundamental limitations related to the assumption that employment and earnings would have remained constant over time in the absence of Samasource’s intervention.

First, for most program participants, Samasource is essentially their first entry into the formal labor force. In their age range and at this early point in their careers, the marginal returns to each additional year of work experience gained are high and an increase in wages over time is to be expected. Empirical evidence from Germany shows that the returns to work experience are steepest during the first six years after entering the workforce and almost flat thereafter.7 Evidence from low- and lower-middle-income countries (Malawi8 and Indonesia9) also suggests positive returns to work experience for workers entering the formal sector. The rise in program participants’ wages found in Samasource’s pre-post comparison could just be reflecting the increase that would have happened in absence of the Samasource program rather than the increase attributable to Samasource.

Second, the low-income population that Samasource targets is particularly vulnerable to relatively large-scale fluctuations in income, whether as a result of negative external shocks or windfall gains. Samasource’s pre-post comparison may simply be capturing an incidental spike in income. The body of evidence for microcredit interventions illustrates an important lesson: while pre-post comparisons suggested dramatic increases in income for borrowers, seven experimental studies with strong counterfactuals did not find significant impacts on borrowers’ average household income.10 Indeed, data from the Kenya Bureau of Statistics show relatively large and unpredictable fluctuations in real average earnings per employee from year to year, falling by as much as 8.3% between 2010 and 2011 and increasing by as much as 10.7% between 2012 and 2013.11

Third, those who enter the Samasource program likely have a number of observable and unobservable characteristics that increase their propensity to find employment and earn higher wages than the average for their demographic. The young people who find out about Samasource through local community-based organizations or from their social networks, successfully complete Samasource application forms and perform well in screening interviews may inherently have more motivation, entrepreneurial ability and better connections. This would result in inflated participant outcomes above those of the average young person.

Samasource is transparent about the limitations of its current methodology, but has stated that in the past, organizational capacity for conducting a more rigorous evaluation was constrained.6 Randomized controlled trials, especially those with long follow-up periods, are undoubtedly a considerable investment. In lieu of experimental evidence from a trial, quasi-experimental evaluations can be valuable suggestive indicators of impact when conducted well. Pre-post comparisons are among the most basic of quasi-experimental methods; there are other, more robust methods that Samasource could attempt without having to invest in a trial, for example, measuring a simple difference or a difference-in-differences using a comparison group of non-participants, or conducting statistical matching to construct a comparison group similar in selected characteristics to the treatment group.

For these reasons, the wage differentials Samasource currently reports as attributable to its program are not a strong measure of Samasource’s impact.
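
As an illustration of the difference-in-differences approach suggested above, the sketch below computes the estimate from invented group means; a real analysis would use survey microdata, covariates and standard errors.

    # Illustrative difference-in-differences estimate (invented figures).

    def diff_in_diff(treat_pre, treat_post, comp_pre, comp_post):
        """DiD = (treatment group change) - (comparison group change); this nets out
        common time trends that a simple pre-post comparison would attribute to the program."""
        return (treat_post - treat_pre) - (comp_post - comp_pre)

    # Hypothetical mean annual earnings (USD) for participants and a comparison group
    # of similar non-participants, measured before and three years after the program.
    effect = diff_in_diff(treat_pre=1000, treat_post=3000, comp_pre=1000, comp_post=1400)
    print(effect)  # -> 1600: the participants' gain net of the comparison group's gain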

Displacement Effects

Impact audits typically consider displacement as a factor that affects Cost of Impact; displacement may reduce the observed benefits, but is unlikely to totally offset impact. However, with vocational education and training programs, there is the potential that the benefits from the program may be the result of displacing others in the market. For instance, a program that helps candidates prepare resumes may make those candidates more attractive to employers, but it will not necessarily increase how many people that employer hires, and so the hiring of program beneficiaries may directly translate to other equally deserving individuals not getting hired.

Estimating displacement effects is particularly challenging and costly, and Samasource has not collected the necessary data to do so. However, the concern with displacement can be reduced if it is clear that the program is helping individuals secure jobs that are typically performed by others with greater opportunities. For instance, if an individual with no educational background displaces a college student working a summer job, we will likely be less concerned with the effects of that displacement.

Therefore, a key question is whether Samasource is employing individuals who otherwise would not be able to secure BPO jobs. If Samasource is employing workers with lower earnings potential than those typically employed in BPO, this would reduce the concern about displacement, as those potentially displaced would have greater prospects than those workers displacing them.

To answer this, we consider three key questions:

  1. Does Samasource employ people who would not otherwise find similar jobs performing microwork?
  2. Does Samasource displace other workers?
  3. Do the workers employed in (1) have lower ability to earn than the workers displaced in (2)?

Answering (1) comprehensively would require experimental evidence on the employment decisions of statistically equivalent populations, who fit Samasource’s targeting criteria, and who are randomly assigned to receive or not receive the Samasource intervention. These data are not available. However, Samasource data show that 92% of agents were underemployed prior to Samasource.12 Samasource makes the case that, without its intervention, these people would continue to be underemployed.

Answering (2) requires an understanding of Samasource’s competitive position in the online outsourcing market, as well as research into the outsourcing and offshoring decisions of Samasource corporate clients. Based on Samasource client case studies and other anecdotal accounts, it appears Samasource does win contracts that would go to other vendors, including for-profit crowdsourcing task marketplace vendors like Amazon Mechanical Turk, oDesk and CrowdFlower and traditional BPO-ITES firms.13–15 In this way, Samasource is operating in a similar way as for-profit vendors. Samasource is competitive against other vendors because of the quality of its work output, pricing of services, customizable/flexible and consultative service and fulfillment of clients’ corporate social responsibility objectives.15–17 While other vendors also compete along the first three dimensions, Samasource’s ability to fulfill CSR objectives could be interpreted as its unique selling point in the market.

Answering (3) requires an understanding of the populations Samasource is employing and displacing. First, based on the available evidence, Samasource is likely responsible for transferring jobs from workers employed by other vendors in developing countries to Samasource agents also in developing countries rather than between workers in developed and developing countries. In two interviews, one with ImpactMatters and one with a third-party,18 Samasource explains that developing country microworkers outcompete developed country microworkers due to lower wages demanded, and that the low pay for microwork can provide a living wage in developing countries but not in developed countries. Consistent with this general conclusion is a study of Amazon Mechanical Turk microworkers in India and in the United States19 that suggests American microworkers are more likely to perceive microwork earnings as “extra” earning rather than a primary source of income.

Second, based on the available evidence, Samasource’s transfer of jobs likely happens between socioeconomic classes within developing countries. For lower-middle class youth in India, BPO-ITES is seen as a lucrative employment opportunity. According to one sociological study of the Indian IT industry, this population would otherwise be “unemployed or working in low-paid service or clerical jobs in the domestic sector.”20 For middle-class fresh graduates in India, working in BPO-ITES is seen as a “convenient stopgap to earn money before or while pursuing higher studies.” If the same pattern of relative economic benefit for different socioeconomic classes exists between extremely poor youth and lower-middle class youth in the areas in which Samasource operates, it may be that Samasource is employing poorer workers than the typical BPO firm.

See the section on the Business Process Outsourcing Market for more details.

Based on this analysis, it appears the concern with Samasource’s displacement is lessened because Samasource (1) provides work to otherwise underemployed people, (2) does not displace workers beyond the displacement that would occur anyway in the competitive online outsourcing market, and (3) given the evidence that BPO-ITES work represents relatively greater economic benefit to workers of lower socioeconomic class, generates gains for its target population that may outweigh the job losses of higher socioeconomic class workers.

Independent Validation In Progress

Nonprofits that are in the validation stage, such as Samasource, are evaluated on whether they are producing high-quality internal evidence and have high-applicability external evidence. Internal evidence is derived from the nonprofit’s own internal evaluation and independent validation of the nonprofit’s program by third-party evaluators.

Samasource will be carrying out a randomized controlled trial in Kenya with third-party evaluator Innovations for Poverty Action (IPA). To gather information about the forthcoming study, ImpactMatters reviewed a Memorandum of Understanding outlining the basic study design and the agreement between Samasource and IPA, and conducted interviews with Samasource management. Based on the information reviewed, ImpactMatters anticipates the study has the potential to produce high-quality evidence of impact for Samasource’s program. However, because the study is still at the planning stage and sufficient documentation of study protocols has not been available, the quality of the forthcoming Samasource study is considered indeterminate.


Table 2. Findings on Independent Validation Studies

Forthcoming Samasource Study    Indeterminate    Indeterminate

Forthcoming Samasource Study

Table 3. Details of Forthcoming Samasource Study

Timeframe: January 2017 - December 2018
Intervention: Samasource Training and Impact Sourcing
Method: Multiple treatment arm randomized controlled trial. A sample of eligible Samasource Training applicants will be randomized into one of three groups: a control group that receives neither Samasource Training nor Impact Sourcing; one group that receives the Samasource Training course but no subsequent referral for employment as an Impact Sourcing agent; and one group that receives both the Samasource Training course and a referral for Impact Sourcing employment. Applicants will be followed for at least a year after completion of the Samasource Training course.
Sample: Approximately 275 participants per study arm (825 in total sample)
Geography: Kenya, in the catchment areas from which Samasource Training usually draws
Evaluator: Innovations for Poverty Action
Investigators: David Atkin and Antoinette Schoar
Status: Planning stage; design of baseline instrument is in progress.
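
A stylized sketch of the three-arm assignment described in the table is shown below; the arm labels and sample size follow the table, but the code is purely illustrative and is not IPA’s randomization protocol.

    # Illustrative three-arm random assignment (not IPA's actual protocol).
    import random

    ARMS = ["control", "training_only", "training_plus_referral"]

    def assign_arms(applicant_ids, seed=0):
        """Shuffle eligible applicants and split them evenly across the three study arms."""
        rng = random.Random(seed)
        ids = list(applicant_ids)
        rng.shuffle(ids)
        per_arm = len(ids) // len(ARMS)
        return {arm: ids[i * per_arm:(i + 1) * per_arm] for i, arm in enumerate(ARMS)}

    # Hypothetical usage with 825 eligible applicants, roughly 275 per arm.
    assignment = assign_arms(range(825))
    print({arm: len(group) for arm, group in assignment.items()})  # -> 275 in each arm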

Evidence from Elsewhere Medium Applicability

Programs like Samasource are rare among technical and vocational education and training (TVET) programs. TVET includes a wide range of different interventions. Some programs conduct classroom trainings and online courses. Other programs provide apprenticeships, certifications or vouchers to pursue training on the open market. TVET can also include business training for micro, small and medium enterprise owners. Samasource differs from the majority of these programs in that its trainees do not learn a skilled trade. Most evidence about TVET programs is irrelevant to the Samasource model, which turns underemployed urban youth into experienced, entry-level hires for information technology, clerical, sales and administrative jobs.

The few studies that describe the impact of similar interventions are of high quality and show positive but limited results. A Campbell systematic review of TVET is underway but not yet published.21 The protocol for the review discusses three non-systematic reviews, which address a broader TVET literature and touch briefly on studies of interventions that are similar to Samasource.22-24 In the discussion that follows, programs similar to Samasource are those that provide on-the-job training for urban youth in similar entry-level jobs with minimal computer literacy requirements; and not apprenticeships, certifications, or training for entrepreneurs, the self-employed and small business owners. An earlier Campbell systematic review is discussed in the findings below.25 Systematic reviews reduce publication bias by reviewing a cross section of registered study protocols, rather than final published results. Randomized controlled trials from Colombia,26 the Dominican Republic,27 Liberia28 and Malawi29 provide the most relevant available evidence for programs similar to Samasource.

Table 4. Findings on Evidence from Elsewhere

Study                                   Quality of Evidence   Relevance   Applicability
Adoho, Chakravarty, Korkoyah et al.     High                  Medium      Medium
Attanasio, Kugler and Meghir            High                  Low         Low
Card, Ibarrarán and Regalia             Low                   Low         Low
Cho, Kalomba, Mobarak and Orozco        Medium                Low         Low
Tripney, Hombrados, Newman, et al.      High                  Low         Low


For details on the analysis for each randomized trial, see Study Reviews.

Adoho et al. 201428

Table 5. Details of Adoho et al. 2014

Intervention: Economic Empowerment of Adolescent Girls and Young Women (EPAG)
Evaluator: Independent evaluators
Method: Randomized controlled trial
Sample: 1,273 young women assigned to treatment group; 808 assigned to control
Investigators: Franck Adoho, Shubha Chakravarty, Dala T. Korkoyah, Mattias Lundberg, Afia Tasneem
Publication: Adoho F, Chakravarty S, Korkoyah DT, Lundberg M, Tasneem A. The Impact of an Adolescent Girls Employment Program: The EPAG Project in Liberia [Internet]. The World Bank; 2014 Apr [cited 2016 Oct 19]. Available from:

Study Abstract

This paper presents findings from the impact evaluation of the Economic Empowerment of Adolescent Girls and Young Women (EPAG) project in Liberia. The EPAG project was launched by the Liberian Ministry of Gender and Development in 2009 with the goal of increasing the employment and income of 2,500 young Liberian women by providing livelihood and life skills training and facilitating their transition to productive work. The analysis in this paper is based on data collected during two rounds of quantitative surveys in 2010 and 2011, the second of which was conducted six months after the classroom-based phase of the training program ended. Strong impacts are found on the employment and earnings outcomes of program participants, relative to a control group of non-participants. The EPAG program increased employment by 47 percent and earnings by 80 percent. In addition, the impact evaluation documents positive effects on a variety of empowerment measures, including access to money, self-confidence, and anxiety about circumstances and the future. The evaluation finds no net impact on fertility or sexual behavior. At the household level, there is evidence of improved food security and shifting attitudes toward gender norms. These results reinforce the highly positive feedback received from focus group discussions with program participants. Finally, preliminary cost-benefit analysis indicates that the budgetary cost of the EPAG business development training for young women is equivalent to the value of three years of the increase in income among program beneficiaries. These preliminary results provide strong evidence for further investment and research into young women’s livelihood programs in Liberia.

Findings (at two-year endline)

The treatment effects from the program are reported here simply as “increases” for simplicity, and all are significant at the 0.1% level. Total earnings increased by 80% relative to the control group: at baseline, mean weekly earnings were $22.71 PPP, and weekly earnings for the treatment group increased by $16.60 PPP more than for the control group. Participation in income generating activities, which was 38.1% at baseline, increased by 18.1 percentage points more for the treatment group than for the control group, which translates to a 47% increase in the treatment group’s participation in income generating activities.

Quality Of Evidence High

Table 6. Adoho et al. 2014 Review Findings

Risk of bias associated with
  Random sequence generation: Low
  Allocation concealment: Low
  Blinding of participants and personnel: Low
  Blinding of outcome assessment: Uncertain
  Incomplete outcome data: Low
  Selective reporting: Low
Study design uses
  Appropriate level of random assignment: Yes
  Adequate sample size: Unclear
  Validated measure of the outcome: Yes
  Appropriate follow-up period: Yes

Relevance Medium

Similarity of models

The Samasource model includes targeting and training, which may be through the ten-day Samasource Training course or on-the-job training as a Samasource agent, with additional training in soft skills and personal development. Complementary services, such as childcare and transportation to work, are also offered to facilitate working and training at Samasource. The EPAG program in Adoho et al. consists of targeting and six months of classroom-based training followed by a six-month job placement in either self-employment or wage employment, with similar complementary services also offered.

Training content: Samasource Training and Samasource on-the-job training focus on digital literacy and the specific digital microtasking skills, such as image annotation for machine learning, that are in demand among Samasource’s corporate clients. By contrast, EPAG offered two tracks of training: job skills (including hospitality, professional cleaning, office/computer skills, professional painting, security guard services, professional driving) and business development services (microenterprise development and management). However, a key similarity between the job skills track in EPAG and Samasource is that both are highly market-driven. Curricula in both programs are designed to reflect market conditions and demands. Moreover, both programs include not only training in hard skills, but also in soft skills and life skills.

Training duration: Samasource Training is much shorter than the six months of classroom-based training offered in Adoho et al.’s study. However, it is not clear whether the six-month job placement in Adoho et al. offers as much training and supervision as agents receive while on-the-job at Samasource.

Employment duration: Adoho et al.’s EPAG program guaranteed six months of employment, whereas the average agent’s tenure at Samasource in 2015 was, at 11.9 months, almost twice as long.

Targeted population

Adoho et al.’s EPAG program targeted young women aged 16 to 27, who had basic literacy and numeracy skills and who were not enrolled in school several months prior to the program (to avoid incentivizing dropping out of school). EPAG recruited from nine target communities in Liberia. Samasource’s eligibility criteria filter in a very similar population of youths aged 18 to 30 who are unemployed or informally employed, earn below a specific income threshold, have fulfilled at least basic high school requirements and, for those applicants in Kenya, live in areas identified to be low-income or informal settlements.

Local context

Liberia, where EPAG is implemented, is a low-income country with a per capita national income similar to the countries where Samasource operates: Kenya, Uganda (low-income), India (lower-middle-income) and Haiti (low-income).30 Samasource Training operates in Kenya. A comparison of Kenya and Liberia reveals a similar age structure, with a notably large youth population, and high unemployment rates in both countries. Small-scale agriculture is the dominant occupation in both countries.31

Attanasio et al. 201126

This RCT of a job training program in Colombia showed significant wage gains from vocational training, particularly among women.26 The probability of employment upon completion also rose but perhaps not as much as might have been hoped, with treatment effects of about 7% for female and 2% for male participants.

Table 7. Details of Attanasio et al. 2011

Intervention: Jóvenes en Acción – Subsidized Vocational Training for Disadvantaged Youth
Evaluator: Sistemas Especializados Informacion (SEI) S.A.; Econometria Consultores
Method: Randomized controlled trial
Sample: 2,040 youths assigned to treatment group; 2,310 assigned to control
Investigators: Orazio Attanasio, Adriana Kugler, and Costas Meghir
Publication: Attanasio O, Kugler A, Meghir C. Subsidizing Vocational Training for Disadvantaged Youth in Colombia: Evidence from a Randomized Trial. Am Econ J Appl Econ [Internet]. 2011 Jul [cited 2016 Oct 19];3(3):188–220. Available from:

Study Abstract

This paper evaluates the impact of a randomized training program for disadvantaged youth introduced in Colombia in 2005. This randomized trial offers a unique opportunity to examine the impact of training in a middle-income country. We use originally collected data on individuals randomly offered and not offered training. The program raises earnings and employment for women. Women offered training earn 19.6 percent more and have a 0.068 higher probability of paid employment than those not offered training, mainly in formal-sector jobs. Cost-benefit analysis of these results suggests that the program generates much larger net gains than those found in developed countries.

Findings (at 13-15 month endline)

The TVET program had a treatment effect of a 20% increase in total earnings from wages and salary above the control baseline mean of $120.53 per month, calculated in US dollars at purchasing power parity (USD PPP). Formal salary showed a 33% treatment effect, relative to a control endline mean of $102.66 (USD PPP). Employment showed a 5.4% treatment effect (baseline control mean 46%). Paid employment showed a 6.8% treatment effect (baseline control mean 32.8%). Formal employment showed a 6.9% treatment effect (baseline control mean 6.1%). All findings were significant at the 5% level.

Quality Of Evidence High

Table 8. Attanasio et al. 2011 Review Findings

Risk of bias associated with:
Random sequence generation: Low
Allocation concealment: Low
Blinding of participants and personnel: Low
Blinding of outcome assessment: Uncertain
Incomplete outcome data: High
Selective reporting: Low
Study design uses:
Appropriate level of random assignment: Yes
Adequate sample size: Yes
Validated measure of the outcome: Yes
Appropriate follow-up period: Yes

Relevance Low

Similarity of models

Attanasio et al.’s Jóvenes en Acción provided three months of subsidized classroom training with private training institutions and three months of on-the-job training during unpaid internships with legally registered companies. In addition, Jóvenes en Acción included a cash transfer of about US$2.20 to $3.00 per day for transportation, lunch, and childcare for those with young children. Samasource offers similar classroom training through Samasource Training and on-the-job training through Samasource employment, with several key differences:

Training content: The courses offered in Attanasio et al. prepared trainees for jobs as varied as florists, library assistants, industrial production operators and cattle farmers. Samasource offers training specifically for digital microwork and online freelancing.

Training duration: Attanasio et al.’s Jóvenes en Acción provided three months of classroom training and, during their internships, participants received an average of 5.19 hours a day of on-the-job training. The Samasource Training course spans only ten days, but Samasource agents receive continual on-the-job training as long as they are employed at Samasource, which recent monitoring data reveals to be almost a year on average.

Employment duration: A key difference between Attanasio et al.’s program and Samasource is that participants in the former were not offered employment per se, but unpaid internships. Taking into account the opportunity cost of the unpaid internship (most importantly, lost earnings), this difference in program design has potentially large negative consequences for participant outcomes compared to Samasource’s program.

Targeted population

Attanasio et al.’s Jóvenes en Acción targeted young people aged 18 to 25 who were unemployed and belonged to the two lowest deciles of the income distribution in Colombia. This is similar to Samasource’s targeting criteria, with the major difference being the absence of a basic high school education requirement in Jóvenes en Acción. Indeed, participants in Jóvenes en Acción had an average of just ten years of education at baseline and had dropped out of high school.

Local context

There are notable differences between the local contexts of Samasource and the Attanasio et al. study. Samasource works in low-income and lower-middle-income countries, whilst Jóvenes en Acción took place in Colombia, an upper-middle-income country. Taking Kenya as the country in which Samasource has reached the greatest scale, the age structure in Kenya is much more skewed toward youths than in Colombia, and unemployment is far more prevalent in Kenya than in Colombia. The vast majority of the Kenyan labor force also works in small-scale agriculture, while the majority of the Colombian labor force works in services.31

Card et al. 201127

This RCT of a job training program in the Dominican Republic found no significant effects on employment and modest impact on wages, approximately a 10% improvement conditional on current employment.27

Table 9. Details of Card et al. 2011

Intervention: Juventud y Empleo – Youth Job Training Program
Evaluator: Independent evaluators
Method: Randomized controlled trial
Sample: 5,801 youths assigned to treatment group; 2,564 youths assigned to control
Geography: Dominican Republic
Investigators: David Card, Pablo Ibarrarán, Ferdinando Regalia, David Rosas-Shady, Yuri Soares
Publication: Card D, Ibarrarán P, Regalia F, Rosas-Shady D, Soares Y. The Labor Market Impacts of Youth Training in the Dominican Republic. J Labor Econ [Internet]. University of Chicago Press, Chicago, IL; 2011 Apr [cited 2016 Oct 20];29(2):267–300. Available from:

Study Abstract

We report the impacts of a job training program operated in the Dominican Republic. A random sample of applicants was selected to undergo training, and information was gathered 10–14 months after graduation. Unfortunately, people originally assigned to treatment who failed to show up were not included in the follow-up survey, potentially compromising the evaluation design. We present estimates of the program effect, including comparisons that ignore the potential nonrandomness of “no-show” behavior, and estimates that model selectivity parametrically. We find little indication of a positive effect on employment outcomes but some evidence of a modest effect on earnings, conditional on working.

Findings (at six month endline)

Treatment effects are reported here as “increases” for simplicity; none differ significantly from zero. Monthly earnings rose by 10%, relative to a control mean of $215.87 (USD PPP) at endline. Employment increased by 1.4%, with a control mean of 3.4% at baseline.

Quality Of Evidence Low

Table 10. Card et al. 2011 Review Findings

Risk of bias associated with:
Random sequence generation: Low
Allocation concealment: High
Blinding of participants and personnel: Low
Blinding of outcome assessment: Uncertain
Incomplete outcome data: High
Selective reporting: Low
Study design uses:
Appropriate level of random assignment: Yes
Adequate sample size: No
Validated measure of the outcome: Yes
Appropriate follow-up period: Yes

Relevance Low

Similarity of models

Card et al.’s Juventud y Empleo program includes targeting and a maximum of 350 hours of training provided by private training institutions, followed by a two-month internship. Training covered both basic skills training, to strengthen self-esteem and work habits, and technical/vocational training designed to meet the needs of local employers. Each private training institution was required to have a formal agreement from at least one local firm to offer graduates of its training programs two-month internships in exchange for Juventud y Empleo’s full subsidization of the interns’ wage costs. Participants were not paid during the classroom training phase, but received about US$40 a month to offset the costs of transportation and meals. Samasource Training also offers a classroom component, often followed directly by employment at Samasource. Whereas Attanasio et al.’s program in Colombia offered unpaid internships, the paid internships in Card et al.’s program more closely mirror employment at Samasource.

Training content: Juventud y Empleo offered training in a variety of job-specific skills, but Card et al. do not describe the sets of skills and career paths made available to participants. Based on the information available, it is unlikely that they match the IT-centric training at Samasource.

Training duration: Samasource Training is substantially shorter than the classroom-based training in Juventud y Empleo.

Employment duration: Almost all Juventud y Empleo interns were not subsequently hired by the local firms, so their duration of employment in the program was only two months, compared to the 11.9-month average at Samasource.

Targeted population

Card et al.’s Juventud y Empleo program was targeted at low-income youths in the Dominican Republic aged 16 to 29 who had at most 11 years of education and who were not currently enrolled in regular school. Samasource reaches a similar target population, with the important exception that Samasource requires applicants to have a basic high school education.

Local context

Samasource operates in low- and lower-middle-income countries, while Juventud y Empleo was implemented in the Dominican Republic, an upper-middle-income country. The differences between Kenya, one of Samasource’s intervention sites, and the Dominican Republic parallel the differences discussed above between Kenya and Colombia: Kenya has more youth, higher unemployment and a greater share of labor in agriculture.32

Cho et al. 201329

This RCT of a job training program in Malawi found no additional employment or income, and the program suffered from high dropout rates.29

Table 11. Details of Cho et al. 2013

Intervention: Technical Education and Vocational Training Authority (TEVETA) youth apprenticeship program
Evaluator: Independent evaluators
Method: Randomized controlled trial
Sample: 1,254 youths assigned to treatment group; 646 assigned to control
Investigators: Yoonyoung Cho, Davie Kalomba, Ahmed Mushfiq Mobarak, Victor Orozco
Publication: Cho Y, Kalomba D, Mobarak AM, Orozco V. Gender Differences in the Effects of Vocational Training: Constraints on Women and Drop-Out Behavior [Internet]. Washington DC; 2013 [cited 2016 Oct 19]. Report No.: WPS 6545. Available from:

Study Abstract

This paper provides experimental evidence on the effects of vocational and entrepreneurial training for Malawian youth, in an environment where access to schooling and formal sector employment is extremely low. It tracks a large fraction of program drop-outs—a common phenomenon in the training evaluation literature—and examines the determinants and consequences of dropping out and how it mediates the effects of such programs. The analysis finds that women make decisions in a more constrained environment, and their participation is affected by family obligations. Participation is more expensive for them, resulting in worse training experience. The training results in skills development, continued investment in human capital, and improved well-being, with more positive effects for men, but no improvements in labor market outcomes in the short run.

Findings (at four month endline)

Treatment effects are reported here as “increases” (or “decreases”) for simplicity. None of the treatment effects differ significantly from zero. Total earnings decreased by 20%, relative to a control mean of $7.70 per week (USD PPP), or about $400 per annum. Monthly expenses decreased by 16%, relative to a control mean of $30.46 per week. Time spent working increased by 25% relative to a control mean of 6.15 hours at endline.

Quality Of Evidence Medium

Table 12. Cho et al. 2013 Review Findings

Risk of bias associated with:
Random sequence generation: Low
Allocation concealment: Low
Blinding of participants and personnel: Low
Blinding of outcome assessment: Uncertain
Incomplete outcome data: High
Selective reporting: Low
Study design uses:
Appropriate level of random assignment: Yes
Adequate sample size: No
Validated measure of the outcome: Yes
Appropriate follow-up period: No

Relevance Low

Similarity of models

The TEVETA intervention in Cho et al. includes targeting and on-the-job training by placing participants into apprenticeships with master craftspeople. Apprenticeships were unpaid, but participants received a small US$28 stipend for meals and accommodation. The most important differences between the TEVETA program and Samasource are as follows:

Training content: TEVETA participants in Cho et al. chose from among 17 different trades, including clothing fabrication, auto mechanics, metalwork, beauty-related trades and construction. By comparison, Samasource’s training is specific to digital microwork and online freelancing. Moreover, Samasource offers ten days of classroom-based training, and 75% of Samasource agents in Kenya begin as trainees in the Samasource Training program; the progression from classroom-based training to on-the-job training is therefore the principal path through the Samasource program in practice. The TEVETA training, by contrast, is purely on-the-job.

Training duration: Apprenticeships lasted less than three months in the TEVETA program, while on-the-job training at Samasource lasts as long as workers’ tenure as agents, which is typically 11.9 months.

Employment duration: TEVETA apprenticeships were unpaid and only 1.2 to 3.9% of participants received paid work from master craftspeople after their apprenticeships. Meanwhile, Samasource pays agents wages above national Fair Wage Guide requirements for almost a year.

Targeted population

Cho et al.’s TEVETA program targeted low-income youths aged 15 to 24, with a special focus on orphans and/or school dropouts. While only 26% of the TEVETA control group had completed secondary education at baseline, Samasource requires that applicants have at least a basic high school education. Samasource also does not specifically target orphans and school dropouts.

Local context

Malawi, Cho et al.’s implementation site, is classified as a low-income country, similar to Samasource’s countries of operation. Malawi also has a very large youth population in proportion to its total population and, as in Kenya, a high percentage of the labor force works in agriculture.32

Tripney et al. 201325

This Campbell systematic review found positive and significant effects on employment and formal-sector employment, with heterogeneity across studies.25 It also found positive and significant effects on monthly earnings and weekly hours worked, but not on self-employment earnings.

Table 13. Details of Tripney et al. 2013

Intervention: Technical and Vocational Education and Training (TVET) interventions
Timeframe: Includes studies published between 2000 and 2011
Evaluator: The Campbell Collaboration/Independent evaluators
Method: Systematic review with meta-analyses
Sample: 26 studies (3 randomized controlled trials; 23 quasi-experimental studies)
Geography: Eleven upper-middle-income countries (Argentina; Bosnia and Herzegovina; Brazil; Chile; China; Colombia; Dominican Republic; Latvia; Mexico; Panama; Peru); two lower-middle-income countries (India; Bhutan); one low-income country (Kenya)
Investigators: Janice Tripney, Jorge Garcia Hombrados, Mark Newman, Kimberly Hovish, Chris Brown, Katarzyna Steinka-Fry, Eric Wilkey
Publication: Tripney J, Hombrados JG, Newman M, Hovish K, Brown C, Steinka-Fry K, et al. Post-Basic Technical and Vocational Education and Training (TVET) Interventions to Improve Employability and Employment of TVET Graduates in Low- and Middle-Income Countries: A Systematic Review. Campbell Syst Rev [Internet]. 2013 [cited 2016 Oct 19];9(9). Available from:

Study Abstract

The studies included in this systematic review represent the best empirical evidence currently available for the impact of Technical and Vocational Education and Training (TVET) on youth employment outcomes. As the review improves upon prior work by statistically synthesising TVET intervention research, its findings strengthen the evidence base on which current policies and practices can draw. That being said, interpreting the evidence and drawing out the implications for policy and practice is nonetheless challenging. […] In summary, the existing evidence shows that TVET interventions have some promise. Overall, interventions included in this review were found to demonstrate a small, positive effect on all but one of the employment outcomes measured, with the strength of the evidence being stronger for formal employment and monthly earnings than for the other outcomes measured. Furthermore, TVET appears to increase the number of hours worked in paid employment by young women but not young men. Thus, it is both important and worthwhile to continue to invest in TVET provision for youth in developing countries. Although, statistically, the overall effects of TVET may be small, even a small increase in the rate of paid employment, for example, could translate into large numbers of young people entering the labour market, where programmes are delivered nationally.


Findings

Treatment effects of TVET from this meta-analysis are reported here as “increases” for simplicity. Earnings increased by 12.7%, with a 95% confidence interval from 0.045 to 0.21. Paid employment increased by 13.4%, with a 95% confidence interval from 0.024 to 0.243. Formal sector employment increased by 19.9%, with a 95% confidence interval from 0.055 to 0.344.

Quality Of Evidence High

Table 14. Tripney et al. 2013 Review Findings

Well-scoped review question; well-documented and comprehensive search
Appropriate treatment of data: 2/2
Appropriate synthesis of data: 2/2
Sound inference: 2/2
Total assessment: 10/10

Relevance Low

Similarity of models

Of the 11 interventions included in Tripney et al.’s meta-analyses, seven were two-phase programs with classroom-based theoretical training followed by an internship providing on-the-job practical training; two were vocational training programs; one was a technical and vocational vouchers program; and one provided on-the-job training only. The two-phase program design bears the most resemblance to Samasource’s program, where 75% of Samasource agents in Kenya have been funneled from the classroom-based Samasource Training program into on-the-job training while employed with Samasource.

Training duration: Of the seven two-phase programs, classroom training ranged from just 120 hours to 773 hours, and the duration of the internship component – paid or unpaid – ranged from 360 hours to six months.

Training content: The content of training did not always cover soft skills and job readiness, and no programs in Tripney et al. had Samasource’s prominent focus on digital microwork and online freelancing skills.

Targeted population

Of the 11 interventions in Tripney et al.’s meta-analyses, most targeted unemployed, low-income youths below the age of 30. However, only one reflected Samasource’s targeting of youths who had completed secondary education.

Local context

Tripney et al.’s meta-analyses were dominated by upper-middle-income country interventions; only one, a technical and vocational vouchers program in Kenya, took place in a low-income country.

Cost of Impact


Samasource produces a predicted average impact of $1,150 in additional total earnings per participant in the first three years after work at Samasource is complete. This predicted impact is calculated from Samasource’s surveys of its workforce in India and Kenya: the Baseline Survey and the Post Samasource Survey.

ImpactMatters’ prediction of impact uses a conservative assumption to adjust for some of the flaws in Samasource’s survey data: we assume that beneficiaries who do not respond to the Post Samasource Survey have the same earnings as before they worked at Samasource. This methodology cannot give an accurate estimate of Samasource’s impact on earnings in the way that a well-designed evaluation would. Since the results are a blend of careful statistical modeling and pure assumptions, they do not have statistically meaningful confidence intervals or p-values. Nonetheless, this methodology addresses some of the issues with the Samasource data, summarized below.

The Samasource survey data present three key challenges. First, the surveys do not have a comparison group, so all comparisons derived from these surveys use a reflexive comparison, one of the weakest counterfactual designs. We therefore have low confidence that the rise in earnings reported by Samasource’s evaluation team is attributable to the training and work experience received on the job. Second, Samasource also reports wages earned at Samasource as evidence of impact. Effectively, this gives Samasource direct influence over its stated measure of impact, which does not reflect beneficiary outcomes on the labor market. Simply by raising wages and retaining employees longer, Samasource could increase its purported impact, without improving the earnings of Samasource alumni. Third, the low response rates in the Post Samasource Survey likely introduce upward bias into the estimate of alumni earnings. For these reasons, the rise in earnings Samasource reports is an inaccurate estimate of how much beneficiaries’ earnings rose in the labor market.

In this situation, ImpactMatters may choose to use more rigorous evidence from elsewhere rather than present results derived from the Post Samasource Survey. However, the external randomized trial or quasi-randomized trial literature does not provide an impact estimate of a truly comparable program. The closest comparison available, a randomized controlled trial of the Liberia EPAG program discussed in Evidence from Elsewhere, is used in scenario analysis but not in developing the primary estimate of impact. The analysis that follows is therefore based on Samasource survey data, despite the limitations, using a methodology that compensates for some of the bias in Samasource’s measure of its own impact.

The impact of the program is about 64% of the average cost of Samasource per beneficiary, which is the cost of impact ratio (COI). The higher the COI, the greater the impact for a dollar invested in the charity. Samasource is very unusual in that the majority of its revenues come from commercial activity that is intrinsic to the intervention. It would not be possible to effectively train beneficiaries for the workplace without the commercial activity of Samasource. The 64% COI ratio should not be compared to COI ratios for nonprofits that use mostly charitable or government funds to provide services to beneficiaries.

The simple cost of impact (SCI), 65%, and donor’s cost of net impact (DCNI), 63%, are very close to ImpactMatters’ preferred metric, the cost of impact (COI), 64%. All three ratios use the same assumptions about the duration of benefits and the social discount rate. SCI differs from COI in that it ignores costs paid by beneficiaries to access a program. For Samasource, that means the value of the time beneficiaries spend in unpaid training programs is omitted from the cost calculation. DCNI differs from both COI and SCI in that the same costs borne by beneficiaries are subtracted from the value of benefits, rather than added to the total cost of the intervention.
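
To make the relationships among the three ratios concrete, the following minimal sketch (in Python, not ImpactMatters’ own code) recomputes them from the base-case figures reported in Table 15 below and the $39 participant time cost; small differences from the published figures are rounding.

# Illustrative sketch only: how COI, SCI and DCNI relate, using this audit's base-case figures.
discounted_impact = 1150.0      # cumulative discounted earnings gain per participant (Table 15)
beneficiary_time_cost = 39.0    # value of uncompensated participant time, per participant
total_cost = 1800.0             # average cost per participant, including beneficiary time
net_org_cost = total_cost - beneficiary_time_cost   # cost excluding beneficiary time

coi = discounted_impact / total_cost                                   # ~0.64
sci = discounted_impact / net_org_cost                                 # ~0.65
dcni = (discounted_impact - beneficiary_time_cost) / net_org_cost      # ~0.63
print(round(coi, 2), round(sci, 2), round(dcni, 2))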

In addition to the level of wages, Samasource also tracks how agents spend their money, including savings, health care, rent (lodging) and remittances. These outcomes, however commendable, are not reflected in the cost of impact calculations here. Counting increases in spending on particular types of goods and services would double-count benefits already financed through higher wages. Changes to the percent of the household budget allocated to health (or education or financial services) could also be a result of higher wages; in any case, ImpactMatters does not consider those percentages evidence of impact.

Samasource operates at an average cost per beneficiary of $1,800. Average cost per beneficiary is the entirety of Samasource’s direct and indirect costs of impact sourcing, less commercial revenues earned on operating activities, plus any costs paid by partners and beneficiaries, divided by the number of beneficiaries who complete Samasource’s employment program each year. Because Samasource is a hybrid commercial enterprise, the value of commercial revenues is deducted from the total expenses of the organization in order to arrive at a measure of charitable funds committed to current operations.

On average, 1,305 participants annually complete their employment at Samasource delivery centers. This number and the number of new hires fluctuate each year.

Table 15. Findings on Cost of Impact

Average Increase in Total Earnings (1 Year): $400
Average Increase in Total Earnings (Cumulative): $1,150
Average Cost per Participant: $1,800
Cost of Impact: 0.64


Impact (42%). The increase of 42% conservatively estimates the average rise in earnings on the basis of Samasource’s survey data. Samasource requires applicants to complete an intake survey detailing demographics, earnings, employment, and educational attainment. It also surveys alumni to estimate their current earnings, labor force participation, and educational attainment; but only about 32% of those sampled can be reached and agree to complete the survey. ImpactMatters calculated a 42% rise in earnings among beneficiaries on the basis of the Post Samasource Survey, with adjustments to account for the low contact and completion rate.

The 42% increase in earnings is a plausible number. The most relevant randomized controlled trial from the literature showed that TVET increased total earnings by about 81% above baseline. This treatment effect was significantly larger than the median TVET treatment effect. Many TVET programs fail to demonstrate significant treatment effects.

Longevity (3 years). The duration of benefits studied in Samasource’s internal evaluation is three years. The internal evaluation shows strong evidence of growth relative to baseline using a reflexive comparison. Despite the accelerating trend in earnings growth over the evaluation, it is inappropriate to forecast increasing impact over time beyond the support of the data. Many factors may affect the ability of agents to obtain high-paying employment (which Samasource refers to as the “formal sector”) over time, and it is premature to claim that the differences between Samasource alumni and comparable non-participants would tend to widen, rather than attenuate, over time. However plausible the argument, current empirical evidence does not support further increases (or decreases) in counterfactual impact over time.

From a theoretical perspective, the duration of benefits is difficult to predict. The market failure that confronts unemployed youth is an information problem. Samasource resolves that information problem by giving prospective employers concrete information about the performance of specific employees at a reputable BPO firm. The information problem is effectively solved after the agent participates in Samasource’s programs for a certain amount of time.

The counterfactual case, meaning what would have happened in the absence of Samasource, depends on the duration of the job search. Once the individual finds employment at a reputable firm other than Samasource, the same information problem is resolved through the market. The duration of benefits, meaning the period over which earnings after Samasource exceed earnings without Samasource, depends on how earnings evolve over time. How long do individuals require to find their first job without Samasource? Does the employment gap between Samasource alumni and the counterfactual case persist, and if so, for how long? Does the wage differential between Samasource alumni and the counterfactual case persist, and if so, for how long? Absent quasi-experimental evidence, we follow the three years from Samasource’s internal evaluation.

Baseline income ($954 USD per participant). The baseline income in the cost of impact analysis comes from the Samasource entry survey.

Participant time costs ($39 per participant). The majority of participants’ time is compensated at the delivery center wage. Participants are not compensated for time to complete screening surveys, time to complete Samasource Training or time to complete Post Samasource Surveys. The value of participants’ contributed time is estimated at the baseline earnings rate, since the majority (well over 95%) of contributed time occurs before the wage increase that comes with being hired at a Samasource delivery center. Participants spend two weeks in Samasource Training and complete a handful of short surveys, most of which take less than ten minutes. In total, we estimate 81 hours of uncompensated time per agent.
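
The $39 figure is consistent with valuing those 81 hours at the $954 baseline annual earnings spread over a full work year; the annual-hours figure in the sketch below is our own assumption, not a number reported in this audit.

# Rough check of the participant time cost (the annual-hours figure is an assumption).
baseline_annual_earnings = 954.0    # USD, from the Samasource entry survey
uncompensated_hours = 81.0          # screening surveys, Samasource Training, follow-up surveys
assumed_annual_work_hours = 1980.0  # ASSUMPTION: roughly 49.5 weeks x 40 hours

hourly_value = baseline_annual_earnings / assumed_annual_work_hours
print(round(uncompensated_hours * hourly_value))  # ~39, matching the $39 per participant above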

Discount rate (5%). Impact Audits use the standard World Bank discount rate of 5% for impacts on beneficiaries. The sensitivity analysis below shows the same financial ratios at a 0% discount rate and at more present-biased rates of 10%, 15% and 50%.

Costs of partner organizations. Samasource operates with charitable contributions but no partner organizations in an operational sense; instead, the commercial operations of Samasource are its internal operational partner. Financial ratios such as the Simple Cost of Impact (SCI) and Donor’s Cost of Net Impact (DCNI) usually report the nonprofit’s cost excluding commercial, nonprofit and official partners. In the case of Samasource, we exclude non-charitable revenues from the estimate of Samasource costs for the Simple Cost of Impact and Donor’s Cost of Net Impact, but not for the Cost of Impact.

Table 16. Metadata and Sources for Cost of Impact

Outcome: Total earnings
Year audited: 2017
Year of analysis: 2015
Base year (first year of cost data): 2014
Timeframe for cost data: 2014-2015
Participant data included: 2012-2015
Source of impact prediction

Adoho F, Chakravarty S, Korkoyah DT, Lundberg M, Tasneem A. The Impact of an Adolescent Girls Employment Program: The EPAG Project in Liberia [Internet]. The World Bank; 2014 Apr [cited 2016 Oct 19]. Available from:

Attanasio O, Kugler A, Meghir C. Subsidizing Vocational Training for Disadvantaged Youth in Colombia: Evidence from a Randomized Trial. Am Econ J Appl Econ [Internet]. 2011 Jul [cited 2016 Oct 19];3(3):188–220. Available from:

Card D, Ibarrarán P, Regalia F, Rosas-Shady D, Soares Y. The Labor Market Impacts of Youth Training in the Dominican Republic. J Labor Econ [Internet]. University of Chicago PressChicago, IL; 2011 Apr [cited 2016 Oct 20];29(2):267–300. Available from:

Cho Y, Kalomba D, Mobarak AM, Orozco V. Gender Differences in the Effects of Vocational Training: Constraints on Women and Drop-Out Behavior [Internet]. Washington DC; 2013 [cited 2016 Oct 19]. Report No.: WPS 6545. Available from:

Source for cost data: Samasource’s audited financial statements for 2014 and 2015, shared privately with ImpactMatters

Cost of Delivery

Samasource is a social enterprise, accruing both charitable and commercial revenue in a nonprofit entity. The cost of delivery calculated here strips out commercial revenues and non-core program expenses from the total expenses of Samasource. Samasource costs are estimated over a three-year trailing average. The cost of delivery in each year is a net expense line, beginning with total Samasource expenses on the consolidated financial statements. Commercial revenues are deducted from total expenses. Allocated expenses of non-core programs, including both direct program costs and allocated overhead, are also deducted from total expenses. The remainder is the net expense of core Samasource program delivery.

The cost of delivery does not distinguish between wages paid to workers and any other category of expenses, since the wage bill supports activities that generate commercial revenues. It makes no attempt to distinguish capital investments or fixed costs, such as investment in SamaHub, that might be necessary investments for future productivity gains. Very simply, it estimates the difference between the expense of running Samasource core programs and the commercial revenue generated, and treats the shortfall as the cost of delivering on-the-job training to beneficiaries. The average cost per beneficiary over three years is $1,800.
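
As a minimal sketch of that arithmetic, the following lines (Python, for illustration only) recompute the 2015 net expense from the figures reported in Table 17 below.

# Net expense of core program delivery in 2015, recomputed from Table 17 (lines A-D).
total_expenses_2015 = 11_120_466       # A. total Samasource expenses
commercial_revenue_2015 = 5_348_710    # B. commercial revenue earned on operating activities
non_core_allocations_2015 = 2_211_632  # C. direct and overhead costs allocated to non-core programs

net_expense_2015 = total_expenses_2015 - commercial_revenue_2015 - non_core_allocations_2015
print(net_expense_2015)  # 3560124, matching line D of Table 17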

Table 17. Summary of Samasource’s Costs

A. Total expenses, year 2015 (Organization): $11,120,466
B. Commercial revenue, year 2015 (Organization): $5,348,710
C. Allocated costs of non-core activities (Organization): $2,211,632
D. Expenses net of commercial revenue, year 2015 (Organization): $3,560,124
E. Opportunity cost of beneficiary time, year 2015 (Participant): $30,640
F. Total expenses, year 2014 (Organization): $10,859,794
G. Commercial revenue, year 2014 (Organization): $5,969,169
H. Allocated costs of non-core activities (Organization): $1,527,156
I. Expenses net of commercial revenue, year 2014 (Organization): $3,363,469
J. Opportunity cost of beneficiary time, year 2014 (Participant): $56,026
K. Total expenses, year 2013 (Organization): $7,399,958
L. Commercial revenue, year 2013 (Organization): $3,053,897
M. Allocated costs of non-core activities (Organization): $828,534
N. Expenses net of commercial revenue, year 2013 (Organization): $3,517,527
O. Opportunity cost of beneficiary time, year 2013 (Participant): $57,687
P. Average gross expenses, 2013-2015 (Organization): $9,793,406
Q. Average net expenses, 2013-2015 (Organization): $2,192,635
R. Average aggregate beneficiary cost, 2013-2015 (Participant): $48,118
S. Average Cost per Participant (Organization): $1,761

Other Contributions

Participants’ time is the sole additional resource contributed, at an average of 81 hours per beneficiary and valued at the baseline annual wage of $954 (USD). No other official or charitable budgets were disclosed at the time of the audit, and ImpactMatters has no reason to believe any exist.

Prediction of Impact

The impact of Samasource is the increase in wages after Samasource agents leave the program. Using a combination of data and modeling for missing data, we estimate the impact of the program as a 42% rise in earnings above baseline. The model fit is a second-best strategy, since no counterfactual estimates of the impact of a comparable intervention are known to exist as of this writing.

Only wages after completion of Samasource employment are considered. Since the agents are contributing their labor to Samasource commercial operations and receive wages for that work, ImpactMatters does not assume any implicit transfer takes place between agents and the nonprofit. Agents are assumed to be paid what they are worth. While it is true that Samasource agents typically receive a general increase in income when they begin working at Samasource, they are also likely more productive than they were prior to working at Samasource.

This view of Samasource’s impact differs from Samasource’s internal evaluation. Samasource refers to the increase in earnings from baseline to currently employed Samasource agents as “impact,” and monitors their consumption patterns to show evidence of poverty alleviation. Changes in income for current Samasource employees are not considered in this model.

Samasource’s impact is predicted using a simple linear function. The impact, I, is the average change in earnings in the year following completion of Samasource, above the average earnings at baseline. The baseline period is the screening period directly prior to hiring at Samasource. Total earnings includes all wage, salary and self-employment income. Average earnings at baseline, Y, are measured in local wages and converted to USD. The coefficient β is derived from a model using data from Samasource. The Pre-Samasource Survey is completed by all workers at entry. The Post Samasource Survey is conducted annually on a random sample of alumni, but only 32% of those can be reached and agree to complete the interview.

I = β * Y

Samasource reports impact using the full baseline cross-section and a subset of the endline sample. The endline sample includes alumni who may have been out in the workforce for any amount of time up to four years. Respondents who are enrolled in school and those who have dropped out of the labor force are omitted from the earnings calculation. Samasource reports the difference in wages between these two cross-sections as the impact of the intervention, with a small correction to adjust for how baseline wages evolved over time.

The ImpactMatters model of impact assumes instead that those who did not complete the endline survey experienced zero growth in wages since the baseline period. For the 68% who did not complete the endline survey, we assume endline wages are identical to baseline wages. Since the rate of missing data in the second cross-section is high (68%), the data are not missing at random, and data on covariates are unreliable, ImpactMatters did not attempt to impute missing values in the endline data.
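
One way to read that assumption is as a weighted blend of observed and assumed growth rates, as in the sketch below; the respondent growth rate shown is a placeholder, since the rate actually observed among respondents is not reported in this audit.

# Sketch of the conservative adjustment (the respondent growth rate is a PLACEHOLDER).
response_rate = 0.32          # share of sampled alumni completing the Post Samasource Survey
growth_respondents = 1.31     # PLACEHOLDER: observed rise in earnings among respondents
growth_nonrespondents = 0.0   # audit assumption: non-respondents see no change from baseline

# The blended beta feeds the impact model I = beta * Y described above.
beta = response_rate * growth_respondents + (1 - response_rate) * growth_nonrespondents
baseline_earnings_Y = 954.0   # USD, average at entry
print(round(beta, 2), round(beta * baseline_earnings_Y))  # with this placeholder: 0.42 and ~400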

Table 18. Cost of Impact Base Case Assumptions

Input parameters (Base Case)
Estimated impact of Samasource on subsequent average wages: 42%
Longevity of increase in annual income (years): 3
Average baseline household income for participants: $954
Include participant time costs in the calculation of average costs: Yes
Discount rate: 5%
Consider operating expenses as program cost: Yes

Table 19. Summary of Impact

Specifications (Base Case)
Average Increase in Total Earnings (1 Year Post Intervention): $400
Average Increase in Total Earnings (Cumulative): $1,150
Average Cost per Participant: $1,800

Sensitivity Analysis

This section shows how sensitive the key financial ratios are to changes in the assumptions detailed above. In the sensitivity analysis, individual assumptions from the base case are changed and the key financial ratios are recalculated.

Table 19 (above) presents the headline estimates of impact, cost and cost of impact (COI) in the base case. The base case assumes that the impact is the same for each of the first three years, the duration of impact studied in the Post Samasource Survey. Cash flows are discounted at 5% per annum. TVET programs have not been evaluated in randomized controlled trials with long-term follow-up, so there is no external basis for extending benefits further.

Table 20 presents the same financial analysis, using a range of discount rates. A discount rate of zero suggests that investors are infinitely patient, having no preference for impacts that occur today versus in the future. Some evaluators and governments, particularly in developing countries, commonly use a 10% discount rate. A 15% discount rate implies that a benefit that accrues five years in the future is worth only half as much as one that accrues now. A 50% discount rate implies a world of extreme uncertainty where an impact three years away is worth only one-third of the same impact today.
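
A minimal sketch of the discount-rate sensitivity follows; it approximately reproduces the Cost of Impact row of Table 20 under the assumption, ours rather than one stated in the audit, that the $400 annual gain accrues at the start of each of the three years (so the first year is undiscounted).

# Sketch of the Cost of Impact row of Table 20 (the timing convention is our assumption).
annual_impact = 400.0          # average increase in total earnings per year (Table 15)
cost_per_participant = 1800.0
years = 3

for rate in (0.00, 0.05, 0.10, 0.15, 0.50):
    discounted_impact = sum(annual_impact / (1 + rate) ** t for t in range(years))
    print(rate, round(discounted_impact / cost_per_participant, 2))
# prints approximately 0.67, 0.64, 0.61, 0.58 and 0.47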

Table 21 presents the same financial analysis, using a range of longevity of benefits. This table presents alternate cases where the benefits provided by the Samasource program persist longer than the observed three-year period of benefits.

Tables 20 and 21 provide different specifications for the cost-effectiveness of the nonprofit, including:

Cost of impact (COI) is the ratio of total impact to total costs. Total costs include the commercial expenses of Samasource. All sales, general and administrative expenses are included. Only in 2015 did Samasource break out impact sourcing from other activities, and we include both the impact sourcing and non-impact sourcing activities in this calculation. When the COI is greater than 1, the value of impacts is greater than costs using the 5% discount rate. When it is less than 1, the discounted impacts are worth less than the costs invested.

Simple cost of impact (SCI) is a ratio that compares the same benefits as the COI to the charitable revenues of Samasource. Donor’s cost of net impact (DCNI) is a ratio that compares charitable revenue to the net impact on beneficiaries. Beneficiaries complete an estimated 81 hours of surveys and training that are not compensated, though Samasource does cover transaction costs (transportation and meals) during the training period. The value of that time is deducted from the impact. For other nonprofits, any resources, fees or effort contributed by beneficiaries would be deducted from cost to calculate this ratio. The payback period (reported only in Table 20) is the first year in which the up-front investment of costs is equal to impact. The payback period is calculated using the assumptions in the Cost of Impact ratio.

The Social Rate of Return (SRR) is equivalent to the project internal rate of return using the impact to beneficiaries as the return of the project. It represents the value of the discount rate that causes the net present value of impact and costs to equal zero, or the highest hurdle rate that the project could meet.
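
As an illustration of that definition, the short sketch below finds the Social Rate of Return numerically for the base case, using the same timing assumption as the previous sketch; it recovers roughly the -30% shown for the base case in Tables 21 and 22.

# Sketch: the SRR is the discount rate at which discounted impact equals cost (bisection search).
def discounted_impact(rate, annual_impact=400.0, years=3):
    return sum(annual_impact / (1 + rate) ** t for t in range(years))

def social_rate_of_return(cost, lo=-0.99, hi=2.0, tol=1e-6):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if discounted_impact(mid) > cost:
            lo = mid  # impact still exceeds cost, so the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

print(round(social_rate_of_return(1800.0), 2))  # approximately -0.30 for the base case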

Table 20. Cost of Impact Sensitivity Analysis to Discount Rate

Discount rate (Base Case with sensitivity to alternative discount rates)
Assumed discount rate: 0% | 5%* | 10% | 15% | 50%
Cost of Impact: 0.67 | 0.64 | 0.61 | 0.58 | 0.47
Simple Cost of Impact: 0.68 | 0.65 | 0.62 | 0.60 | 0.48
Donor’s Cost of Net Impact: 0.66 | 0.63 | 0.60 | 0.58 | 0.46
Payback Period: 4 | 5 | 5 | 6 | ..

Table 21. Cost of Impact Sensitivity Analysis to Length of Benefits

Length of Benefits (Base Case with sensitivity to alternative length of benefits)
Assumed length (years): 3* | 5 | 10 | 20 | Perpetual
Cost of Impact: 0.64 | 1.01 | 1.81 | 2.91 | 12.75
Simple Cost of Impact: 0.65 | 1.03 | 1.85 | 2.98 | 13.03
Donor’s Cost of Net Impact: 0.63 | 1.01 | 1.82 | 2.96 | 12.59
Social Rate of Return: -30% | 6% | 25% | 28% | ..
  1. The base case uses a discount rate of 5%, zero attenuation of benefits and a 3-year horizon.
  2. Length of benefits refers to the final year of analysis for which costs and benefits are considered, regardless of whether evidence supports the continuation of benefits.
  3. Cost of Impact refers to the ratio of total impact to total cost; a number greater than one indicates the impact is greater than the cost of investment.
  4. Simple Cost of Impact is the ratio of impact to charitable funds alone, excluding participant costs, earned revenue and partner organization costs.
  5. Donor's Cost of Net Impact is the ratio of impact, net of beneficiaries' contributed costs, to charitable funds alone, excluding earned revenue and partner organization costs.
  6. Social Rate of Return, akin to the internal rate of return, is the discount rate that causes the Cost of Impact Ratio to equal exactly 1, meaning discounted impacts have the same present value as cost.


Table 22. Cost of Impact Scenario Analysis

Scenario Analysis: Base Case | Scenario 2 | Scenario 3 | Scenario 4
Cost of Impact: 0.64 | 0.73 | 0.31 | 0.39
Simple Cost of Impact: 0.65 | 0.74 | 0.32 | 0.40
Donor’s Cost of Net Impact: 0.63 | 0.72 | 0.30 | 0.37
Social Rate of Return: -30% | -22% | -58% | -52%

Base Case

The base case uses evidence from the Samasource follow-up study, which is a sample of Samasource alumni. It estimates the difference between earnings at baseline and endline, but it cannot distinguish between the change that did occur among beneficiaries and what would have happened in the absence of the project. If Samasource had reliable evidence from a study with randomized assignment or a quasi-experimental design, the audit would have used that impact estimate instead.

In the base case, the estimated increase in earnings is derived from Samasource follow-up survey evidence. Samasource’s impact calculations exclude observations from the follow-up cross-section when participants cannot be reached or refuse to take the follow-up survey. This practice, known as listwise deletion, is biased when contact and refusal rates are correlated with earnings. Under certain circumstances, data imputation can resolve this type of missing-data problem, but not in this case. Imputing endline earnings from baseline covariates was not feasible, and the data are likely missing not at random (MNAR), in which case imputation is biased and inefficient.

The key assumptions in the base case that shape the estimate of impact are as follows.

  1. The earnings and employment of non-respondents in the follow-up period are identical to their earnings and employment at baseline. This is a conservative assumption, but it compensates for the likely direction of bias in estimated earnings in the follow-up period. Non-respondents, compared to respondents, are less likely to be employed and less likely to have increased earnings.
  2. The duration of benefits, three years, is the same as the longest period studied in the follow-up survey.
  3. The model used to estimate the change in earnings from baseline to follow-up is a random-effects, ordinary least squares regression. More elegant approaches, such as a Heckman two-stage regression of employment and earnings, were not possible with the data available.

Scenario 2

In scenario analysis, we conduct identical financial analysis using different treatment effects, with identical assumptions about discount rates and the duration of benefits. These scenarios are extrapolated from the best available counterfactual studies of worker training (TVET) programs as of this writing. In the most comparable study using a randomized controlled trial, the Liberia EPAG program, average earnings increased by 80%.28 Scenario 2 uses the treatment effect of the Liberia study, adjusting for the different per capita costs of the programs and per capita earnings in each country. Rather than assume that the Liberia program and Samasource have identical impact on wages, it assumes that Samasource has the same return on investment as the Liberia EPAG program. The absolute increase in earnings in Liberia is TE_1 × W_1, where the treatment effect, TE_1, is a percentage of average earnings, W_1. The ratio of impact to cost, TE_1 × W_1 / C_1, is assumed equal for Samasource, TE_S × W_S / C_S. The charitable cost per beneficiary and baseline wages for Samasource are known, so we can solve for Samasource’s implied treatment effect. Rearranging terms yields a predicted treatment effect of 36%.

TE_S = (TE_1 × W_1 / C_1) × (C_S / W_S)
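
A small helper (illustrative Python, not part of the audit) makes the rearrangement explicit; with the audit’s Liberia cost and earnings inputs, which are not reproduced here, it returns the 36% quoted above.

# Sketch of the Scenario 2 rearrangement: equal impact-to-cost ratios imply
#   TE_ref * W_ref / C_ref == TE_sama * W_sama / C_sama.
def implied_treatment_effect(te_ref, w_ref, c_ref, w_sama, c_sama):
    """Solve for Samasource's treatment effect, given a reference program's effect, earnings and cost."""
    return (te_ref * w_ref / c_ref) * (c_sama / w_sama)

# Example call (the EPAG earnings and cost arguments would be the audit's actual inputs):
# implied_treatment_effect(te_ref=0.80, w_ref=..., c_ref=..., w_sama=954.0, c_sama=1800.0)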

The treatment effect from the Liberia EPAG study is far larger than what is commonly achieved in TVET programs.28 TVET programs generally show lower treatment effects on most outcomes, such as each beneficiary’s chance of obtaining employment, formal-sector employment or self-employment, total earnings, self-employment earnings, hourly wages, and average time spent working per week. The findings of the Samasource follow-up study are larger still than treatment effects reported in the impact evaluation literature on TVET programs. Although the Samasource follow-up study is a pre-post comparison with serious limitations due to attrition, the rise in earnings from baseline to post-Samasource employment is larger than the treatment effect calculated in the Liberia EPAG study.

The estimate of 80% is plausible given the findings from the Post Samasource Survey. Among respondents, 85% reported favorable outcomes (including work, education or both). However, the contact rates were low. The first-year impact reported by Samasource is not the same as the impact calculated in this audit. The first-year impact is the wage increase when agents begin to work at Samasource. The impact calculated by Samasource is 184%, but 41% of the gains occurred in the first year when Samasource agents were employed at Samasource. Samasource could, in theory, raise those wages by fiat; but there is no evidence whether that would raise expected earnings after agents leave Samasource.

Even if the first-year impact of Samasource is treated as the baseline income for the study, Samasource reports a change in earnings that is greater than the 80% cited in the external literature. The figure of 184% is also derived from the subset of Samasource alumni that can be contacted and agree to share their employment and earnings data with Samasource. Even if the responses to the Post Samasource Survey are systematically biased upward, the predicted impact of 80% is believable. The true test of Samasource’s impact should be a quasi-experimental design with treatment and control groups, and not a pre-post comparison for Samasource agents.

Samasource’s average cost is substantially higher than in the Liberia EPAG program; but Samasource costs are calculated as the total expense of Samasource per beneficiary, which includes all training programs and the cost of delivering BPO service to Samasource clients.

The findings from Scenario 2 are more promising than the base case: the simple cost of impact and donor’s cost of net impact ratios both rise, though they remain below 1. Using the base case assumptions of a 5% discount rate and a three-year duration of impact, the impacts do not outweigh the cost of investment. The negative social rate of return (-22%) shows that only investors with sharply negative hurdle rates would find that the project passes a cost-benefit analysis.

Scenario 3

Scenario 3 estimates the treatment effect from a meta-analysis of many different TVET programs focused on worker training.25 The reported effect size is a standardized effect size, which effectively compares studies on the basis of the standard deviation of earnings, rather than comparing them on a dollar-for-dollar basis. The purpose of the meta-analysis is to characterize whether TVET programs typically find favorable effects even with different experimental designs and economic settings.

Tripney et al. report a Hedges’ g statistic of 0.127 for the effect of TVET programs on earnings. Hedges’ g is a standardized effect size, which normalizes treatment effects by the standard deviation of the outcome of interest at baseline. Since we were able to obtain the mean and standard deviation of the Samasource follow-up survey sample’s earnings at baseline, we can calculate the effect size equivalent to the Hedges’ g statistic. That treatment effect (21% of baseline earnings) is a plausible effect size, and smaller than what the Post Samasource Survey suggests. But this scenario is based on treatment effects of other TVET programs, in other countries, that have less in common with Samasource.
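
The conversion itself is a one-line rescaling, sketched below; the baseline standard deviation shown is a placeholder chosen for illustration, since the actual figure from the Samasource survey is not reported in this audit.

# Sketch: rescale a standardized effect (Hedges' g) into a share of baseline earnings.
def g_to_share_of_baseline(g, baseline_mean, baseline_sd):
    """Absolute effect = g * SD; divide by the baseline mean to express it as a share."""
    return g * baseline_sd / baseline_mean

hedges_g = 0.127          # pooled TVET effect on earnings (Tripney et al.)
baseline_mean = 954.0     # Samasource baseline annual earnings (USD)
baseline_sd = 1580.0      # PLACEHOLDER standard deviation, chosen so the example yields ~21%
print(round(g_to_share_of_baseline(hedges_g, baseline_mean, baseline_sd), 2))  # ~0.21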

The financial ratios derived from Scenario 3 are much less optimistic. They suggest that in the first three years, discounted at 5%, the impact of charitable funds invested in Samasource is just 31% of the initial investment, with slight adjustments depending on how we account for the value of beneficiaries’ uncompensated time. The social rate of return on the project is -58%.

Scenario 4

The fourth scenario looks at a much smaller cross-section of randomized controlled trials than the meta-analysis. Using only the studies in Liberia, Colombia, the Dominican Republic and Malawi, we take the median impact, or 26% of earnings at baseline. Rather than attempting to estimate the impact of Samasource precisely, this scenario shows only the median treatment effect of programs somewhat similar to Samasource, estimated with different experimental designs and in different economic contexts.

This methodology has several important flaws. First, the ratio of cost per beneficiary to earnings at baseline varies widely across these studies. The Liberia study has the highest cost per beneficiary and the lowest earnings at baseline, while the Dominican Republic study has the highest earnings at baseline and the lowest cost per beneficiary. Perhaps unsurprisingly, the effect in the Liberia study is the largest and is significantly different from zero, while the effect in the Dominican Republic study is small and not statistically significant.

The cost efficacy ratios derived from Scenario 4 are consistent with the findings derived from Tripney et al.’s meta-analysis in Scenario 3, with impacts about 39% as large as the cost of implementing the program and a social rate of return of -52%.

Externalities and Displacement

Negative Externalities High importance
Positive Externalities Moderate importance

With any vocational training program, there is significant concern that the observed impact may be the result of displacement of other workers.

Measuring displacement is particularly challenging. Typical impact studies are well equipped to capture the impact on participants, but not to understand what happened to non-participants as the result of the program. One well-conducted study of displacement found that the benefits of a job placement program in France were almost entirely offset by displacement of other workers not included in the program.

We do not have the data to estimate the extent of Samasource’s displacement, although some evidence is presented in Internal Evaluation above. However, displacement adds a substantial element of uncertainty. Donors should consider the possibility that most or all of the impact of the Samasource program could be offset by displacement of other low-income workers.

Displacement Negative, high importance

There are three potential channels of displacement. Consider the unskilled labor market in Nairobi: if we assume that Samasource Training (not the impact sourcing work itself) teaches job seekers to better search for and secure jobs, then the impact of Samasource could be completely offset by displacement of other workers who fail to find jobs. A leading study in France showed that a job coaching and training program led to nearly complete displacement of entry-level unskilled job seekers within the same metropolitan area.34 If the barriers to employment in the areas where Samasource alumni work have primarily to do with the efficiency of the job search, then Samasource alumni may simply have found a better search algorithm and not resolved an information problem for employers. Each dollar of impact may then be offset by earnings lost to another worker who could not find work.

Samasource alumni may continue conducting digital microwork in Nairobi. If we assume that Samasource delivery center alumni specialize in digital microwork, then a positive supply shock of experienced workers should tend to increase the number of workers employed in digital microwork and lower their wages in a competitive market, all else equal. If this is the case, future wage increases among alumni may be offset by wage decreases among digital microworkers who did not participate in Samasource.

Finally, Samasource may compete with business process outsourcing employment worldwide. If we assume that business process outsourcing is a global market, then a positive supply shock from Samasource should result in lower prices and higher volume for digital microwork solutions, with ambiguous effects on labor demand globally in that sector. There is some anecdotal evidence from interviews that Samasource competes directly with firms in China.1

Labor Demand Positive, moderate importance

The market failure that Samasource solves is a feature of the local labor market where agents live and work. Due to poor information about workers’ abilities and education, employers are reluctant to hire workers and unwilling to pay a fair wage. The result is lower wages and lower employment, all else equal. If Samasource remedies the information problem for employers locally, then it follows that labor demand increases. The resolution of this market failure should result in positive externalities for employers and workers. Employers benefit from accurate information about worker abilities and education, making the job matching process more efficient for employers and job seekers. In markets where information is good and the supply of qualified workers is plentiful, it is easier to operate a business and to find work.

Context and Analysis


Surveys of agents show that over half of Samasource agents are female and over 90% are unemployed or underemployed prior to receiving work.2 Samasource does employ individuals who hold degrees, including bachelor's degrees. However, Samasource staff state that in areas where they work, such as Kenya, such degrees do not necessarily lead to employment, particularly if the individual comes from a poor background and is not well connected. Because the Kenyan government provides scholarships for students, it is common for individuals from poor backgrounds to hold such degrees.

Program stage

Samasource is at the validation stage and is still making substantial changes to its program. These investments may raise the cost of operations relative to a future state in which Samasource has fully defined its model and is primarily scaling.

Appropriate Metric

The Donor’s Cost of Net Impact is often a misleading indicator when the delivery of the program requires other social resources (e.g., from the government or from other nonprofits) or when the impacts count benefits that would still accrue to the beneficiary if they transacted in the for-profit market (e.g., purchasing a vaccination from a private health provider).

However, with Samasource there are no other social resources, beyond donors’ contributions, that fund delivery of the program. The additional resources are provided by businesses that receive fair-market value in exchange for what they pay. In addition, as the measured benefits exclude wages paid to Samasource agents, the impacts measured for Samasource do not count benefits that would still accrue to the beneficiary if they were employed by a for-profit BPO firm.1

As a result, the Donor’s Cost of Net Impact is likely the best metric by which to assess Samasource. This metric shows a positive impact-to-cost ratio within three years at a 5% discount rate.
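
As a rough illustration of the mechanics behind that statement (not a reproduction of ImpactMatters’ calculation, and with placeholder dollar figures rather than Samasource data), the snippet below discounts three years of hypothetical post-program earnings gains at 5% and divides by a hypothetical donor cost per agent:

# Illustrative only: discounting hypothetical earnings gains over a three-year
# horizon at a 5% rate and comparing them to a hypothetical donor cost per agent.
DISCOUNT_RATE = 0.05

def impact_to_cost_ratio(annual_earnings_gains, donor_cost):
    """Present value of post-program earnings gains divided by donor cost."""
    present_value = sum(
        gain / (1 + DISCOUNT_RATE) ** year
        for year, gain in enumerate(annual_earnings_gains, start=1)
    )
    return present_value / donor_cost

# Placeholder figures: $400 per year in earnings gains, $900 donor cost per agent.
print(impact_to_cost_ratio([400, 400, 400], 900))  # about 1.21 with these inputs

A ratio above 1 would mean that discounted earnings gains exceed the donor cost within the three-year window.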


1A caveat: We do not know what happens to workers who are employed by for-profit BPO firms. It is possible that for-profit firms generate a similar level of impact, in terms of increased future wages, as Samasource generates. However, Samasource is optimizing toward future worker success, as opposed to for-profit BPOs, which are optimizing for profit. There is anecdotal evidence suggesting Samasource better positions workers for success through more training, better pay and more support. Furthermore, if Samasource is truly targeting a poorer population than for-profit BPO firms typically employ, the future wage increases delivered by Samasource should produce greater utility than those delivered by for-profit BPO firms. See Business Process Outsourcing Market.

Quality of Monitoring Systems


Samasource has high-quality systems for monitoring activities, targeting, engagement, feedback and outcomes. Although there are some areas for improvement, Samasource’s monitoring systems are, on the whole, well designed and thorough, and indicate that Samasource is consistently delivering a high-quality program to its participants.

The main components of the monitoring systems used by Samasource are:

  1. Targeting: Impact scores are calculated for prospective Impact Sourcing agents and Samasource Training trainees based on screening survey data, and those with the highest scores are entered into the program.
  2. Impact Sourcing at delivery centers: Workflow management, on-the-job training, quality assurance and agent activity tracking are all facilitated through SamaHub, Samasource’s custom project management and database platform. Data on SamaHub are subsequently accessed and analyzed using Looker, a business analytics tool.
  3. Samasource Training at training centers: Lesson planning, classroom activity tracking, attendance and trainee performance are all managed on Kannu, Samasource’s online platform of choice for delivering the Samasource Training course.
  4. Pre- and post-program surveys: Samasource relies on survey data collected at baseline and after participants have exited the program as the basis of pre-post comparisons to estimate impact.
  5. Payroll tracking: Payroll data are submitted every month by partner delivery centers, and Samasource conducts audits every quarter to verify that these data are accurate and that agents are being paid fair wages.

In order to conduct this analysis, ImpactMatters interviewed senior management and mid-level managers, Samasource Training instructors and Samasource line-level workers, visited one Samasource-managed delivery center and three partner delivery centers, visited one Samasource Training program, and reviewed 110 documents and datasets provided by Samasource.

Data Type Credible Actionable Responsible Transportable



Samasource has systems in place to collect data on its own activities in a credible and responsible manner. Activities data are regularly summarized and made available to decision-makers in the organization to take remedial action as necessary, such as ensuring partner delivery centers’ compliance with fair wage requirements. Samasource collects activities data that are linked to its theory of change and has demonstrated transparency both in its disclosure of internal documents to ImpactMatters and its public communications.


The Impact Sourcing component of Samasource’s program involves the following activities: partnering with delivery centers or managing its own delivery center; targeting agents; hiring agents; providing basic training, project-based training and ongoing training for agents; winning client contracts, scoping projects and coordinating with delivery centers; paying agents a living wage; and providing impact programming such as training sessions in financial literacy and soft skills.

The Samasource Training component of Samasource’s program involves the following activities: partnering with training centers; targeting trainees; enrolling trainees; providing ten-day intensive training; and placing graduates into one of three employment paths (employment with Samasource, with Samasource’s corporate partners, or in online marketplaces).

Samasource monitors these activities using the following systems:

  • Partner Certification Program: Spreadsheet-based system for certifying prospective partner delivery centers.
  • Impact scores for candidate screening: Scoring system used to target eligible agents.
  • Impact scores tracking repository: Updated quarterly with impact scores of new agents.
  • SamaHub: Samasource’s proprietary task management and database platform; tracks on-the-job training for Samasource agents and project workflows.
  • Monthly payroll information: Submitted to Samasource by partner delivery centers and used to update a payroll data tracker in real time.
  • Quarterly payroll auditing: Used to ensure that delivery centers are meeting Samasource’s requirements for minimum wages for agents.
  • Participation tracker for impact programming: Tracks agents’ participation in impact programming sessions delivered at various delivery centers.
  • Training partner assessment tool: Systematically determines the eligibility of prospective partner training centers.
  • Applicant screening form: Targets and scores prospective trainees and is closely aligned with the impact score for Samasource candidates.
  • Kannu: Samasource Training’s online learning platform for classroom activity management.
  • Placement services and placement tracking: On a monthly basis, Samasource tracks the number of recommendations it makes for trainees to attend job interviews, the number of interviews actually attended, and trainees’ interview outcomes.


The data that Samasource collects capture the essence of the activities it seeks to measure. Using a combination of online and offline spreadsheet-based tools, custom-built database software and pre-fabricated software as a service (SaaS), Samasource tracks activities related to selecting partner delivery centers and training centers, targeting agents and trainees, managing workflows on client projects, managing classroom activity, providing complementary impact programming for agents and placing Samasource Training graduates into employment paths.

Samasource uses standardized data collection instruments to track its activities. Most data collection is performed internally by a core team of managerial staff that refers to standard guidelines to ensure they are collecting data reliably. The natural exceptions are data collected from applicants for targeting purposes: prospective agents and prospective trainees provide these data themselves on short, user-friendly, self-explanatory forms.

Samasource surveys are largely self-administered, which helps minimize response bias: survey respondents feel less obligated to give answers they believe to be more socially acceptable or that will please the enumerator. The exception is quarterly payroll auditing, which Samasource Field Managers conduct over the phone with a randomized list of agents. Samasource minimizes potential bias by standardizing call scripts and by emphasizing in payroll auditing guidelines the importance of confidentiality and of managing conflicts of interest with the partner delivery centers in question. All Samasource survey questions are also phrased neutrally and thoughtfully, so as not to systematically encourage respondents to answer in certain ways. Furthermore, ongoing data review facilitated by the business analytics tool Looker helps ensure activities data are free from systematic error.


Data dashboards on Looker can be created whenever necessary. These dashboards filter real-time activity data along customizable dimensions, allowing the Samasource team to create ad-hoc reports that shed light on specific aspects of program progress.

Each month, partner delivery centers submit their center’s payroll information, which is incorporated into Samasource’s payroll data tracker in real time. Payroll auditing is performed on a quarterly basis, after which data are compiled and analyzed, and processes are put in place for remediation where necessary. Samasource’s Managing Director and Vice President of Professional Services are responsible for escalating continued non-compliance by delivery centers. Similarly, partner delivery centers are tracked to assess their adherence to targeting requirements in the recruitment of agents. Quarterly reports are generated and used as the basis for discussions between the Samasource Impact Operations Manager and individual delivery centers. These quarterly reports are also shared with other members of the senior management team, keeping them apprised of any escalation points and corrective action planned.

Samasource’s theory of change rests on the successful mitigation of several risks: the risks that partner delivery centers and training centers have low quality of delivery and do not follow Samasource impact requirements in targeting, that the demand for managed BPO solutions from end users falls off, that Samasource and Samasource Training equip agents and trainees with non-transferrable skills, that partner delivery centers pay agents less than a living wage and report false payroll information to Samasource, and that local demand for trained digital microtask agents slackens. Samasource collects ample activities data to mitigate these risks.

Staff at all levels, from members of the Impact Team to the Samasource Managing Director and CFO, have the ability to access Looker data and generate the reports they need. Responsibilities for data analysis, data reporting, escalation of cases and resolution of cases are clearly defined for the members of the managerial team.


Looker and SamaHub enable many organizational efficiencies, including real-time data processing, centralization of multiple streams of data and automation of data reports. Samasource also uses enumerators only when necessary (such as in payroll audit calls), preferring to use self-administered surveys whenever possible, thereby reducing the data collection burden on staff and even on respondents. The self-administered surveys and the payroll audit call are short, well edited and contain no extraneous material.

Payroll audit phone calls are conducted to verify the accuracy of payroll data submitted by partner delivery centers each month and to ensure that agents are being paid fair wages. In order to avoid conflicts of interest and avoid pitting agents against their respective delivery centers, Samasource Field Managers conducting the phone calls are instructed not to inform agents that they have access to their salary information from the delivery centers. Samasource maintains a code of ethical data collection and dissemination in the gathering of other activities data as well, as summarized in a white paper on its impact measurement methodology: “We encourage our workers’ honesty by upholding their privacy and requiring their consent to release any information with which they provide us.”6


The data Samasource collects track activities as they appear in the theory of change. Samasource does not collect data about extraneous activities.

Samasource shared with ImpactMatters standardized data collection instruments, data collection guidelines, raw data and data reports related to activities data. Samasource makes publicly available descriptive information about its activities and the monitoring systems it uses to track activities, as well as summary results from activity monitoring data. Summary results are published quarterly as Impact Scorecards5 and annually in Annual Reports.5



Samasource collects credible data on prospective agents and trainees using standardized, self-administered web-based surveys that are designed to compute impact scores based on applicants’ answers. Applicants are scored higher the more they match the target population characteristics outlined in Samasource’s theory of change. Samasource takes care to collect data responsibly, and has a track record of taking action based on targeting data. A notable example is Samasource’s use of quarterly targeting data as the basis for the decision to off-board those partner delivery centers that failed to prioritize recruitment of high-need agents.


Samasource targets agents through a multi-step process. First, agents are referred to Samasource through local community-based organizations, government organizations, schools and other NGOs. Radio advertisements and flyers are also used to attract applicants. Second, applicants take a screening survey that generates an impact score, which Samasource uses to determine eligibility for the Impact Sourcing program. Third, applicants take a written skills test and have a face-to-face interview. Based on the impact score, written skills test and interview, Samasource accepts the applicant as an agent. Lastly, if a client's project is available at the time, agents undergo a week-long training (and sometimes a client interview) before taking the baseline survey and either working right away or commencing a working-while-training phase. If no projects are available, the agent is "benched" and may be called in at a later time.

The screening process and criteria for Samasource Training trainees are analogous to those for Impact Sourcing agents. Applicants for the training course are given a student recruitment score, which is the Samasource Training counterpart to the impact score. Partner organizations are responsible for recruiting, interviewing and enrolling trainees, and for adhering to eligibility criteria in the process.

Data systems for targeting include:

  • Impact scores for candidate screening (Impact Sourcing): A scoring system used to target eligible agents.
  • Impact scores tracking repository (Impact Sourcing): Updated quarterly with the impact scores of new agents.
  • Applicant screening form (Samasource Training): Targets and scores prospective trainees using a student recruitment score.
  • Student recruitment scores tracking repository (Samasource Training): Updated with the student recruitment scores of applicants.


Samasource collects suitable data to track its targeting processes, including raw data from the screening surveys of prospective agents and trainees and summary reports generated from that data.

Standardized data collection instruments help ensure targeting data are collected reliably. The survey instruments are consistent across implementation sites (Kenya, Uganda, India and Haiti), yet also appropriately adapted to the national contexts. For instance, applicants must be above the minimum legal working age in their country and must be earning below the country-specific living wage as defined by the Fair Wage Guide.35

Samasource’s targeting surveys are self-administered web-based forms that contain questions phrased in a neutral way; both these characteristics help minimize response bias. Ongoing data review via Looker, Samasource’s business analytics tool of choice, aids in the early detection of systematic error in data collection.


Samasource uses clear and comprehensive scoring guidelines to assist staff in interpreting raw survey data and calculating impact scores in a consistent and reliable way for every applicant. How incoming agents score against impact criteria is summarized and reported quarterly to the management team.

The Samasource theory of change relies on the mitigation of the risk that partner delivery centers and partner training centers do not follow targeting criteria in the recruitment of agents and trainees. It also relies on the validation of the assumptions that incoming agents and trainees have high enough levels of English literacy and numeracy to be teachable and to benefit from Samasource on-the-job training or the Samasource Training course. Samasource collects the requisite targeting data to mitigate this risk and support these assumptions.

Samasource has a detailed strategy for dealing with cases where incoming agents fall short of impact criteria targets based on the survey data collected: targeting data are analyzed and reported, cases are flagged for escalation, and remedial action is taken with problem partner delivery centers. Responsibility for these tasks is spread across the Impact Team, Managing Director, Vice President for Professional Services and the CFO, and each role is well defined. Interviews with the management team reveal this strategy has resulted in the off-boarding of non-complying partner delivery centers and the retention of only the most mission-aligned partners.


Both the surveys for identifying eligible Samasource agents and for identifying eligible Samasource Training trainees are very streamlined and pose little burden on respondents. The use of data analytics tool Looker also reduces the data collection and analysis burden on Samasource staff.

Samasource’s white paper on its impact measurement methodology states a commitment to upholding survey respondents’ privacy and requiring their consent to release any information.6


Samasource's theory of change specifies a target population that earns wages below the country-specific Fair Wage Guide, is mostly female, is unemployed or informally employed, typically has a high school education, and in Kenya, lives in a low-income area or informal settlement. By computing higher impact scores for applicants who most closely resemble the target population described in the theory of change, and lower impact scores for those least like the target population, and using those scores to directly influence participants' eligibility for the program, Samasource ensures that its targeting systems are consistent with the criteria outlined in its theory of change.
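
The report does not disclose Samasource’s actual scoring weights or eligibility cutoffs, but a minimal hypothetical sketch of an impact-score-style screening rule, built from the target population characteristics listed above, conveys the mechanism: applicants who more closely match the target population accumulate more points.

# Hypothetical sketch only: illustrative weights and fields, not Samasource's
# actual impact score formula or thresholds.
def impact_score(applicant):
    """Return a higher score for applicants who more closely match the target population."""
    score = 0
    if applicant["monthly_wage"] < applicant["country_living_wage"]:
        score += 2  # earning below the country-specific living wage
    if applicant["employment_status"] in ("unemployed", "informally employed"):
        score += 2
    if applicant["highest_education"] == "high school":
        score += 1
    if applicant.get("lives_in_informal_settlement", False):
        score += 1  # relevant in Kenya, per the theory of change
    return score

example_applicant = {
    "monthly_wage": 40,
    "country_living_wage": 120,
    "employment_status": "unemployed",
    "highest_education": "high school",
    "lives_in_informal_settlement": True,
}
print(impact_score(example_applicant))  # 6 with these placeholder inputs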

Samasource shared with ImpactMatters all requested documents related to targeting data, including data collection instruments, data collection guides and raw data. On its website and in annual reports, Samasource shares with the public its targeting methodology,36 characteristics of its target population and general descriptions of its participant population.5



Samasource triangulates engagement data collected and analyzed on automated systems such as SamaHub, Kannu and Looker with engagement data from agents’ follow-up surveys. Together, these data streams provide a credible picture of participant engagement that is in alignment with the Samasource theory of change. Data collection is responsible, and data analysis is reported accurately and in a timely way to facilitate decision-making. Samasource’s methodology for collecting engagement data is also shared publicly online.


Participant engagement at Samasource encompasses several ways in which agents and trainees interact with programmatic activities: agent and trainee attendance, agent tenure, trainee graduation rates, agent and trainee performance, agent participation in complementary impact programming, and trainee attendance at job interviews set up by Samasource.

Data systems related to engagement include:

  • Quarterly tenure of Impact Sourcing agents is tracked on Google Sheets and analyzed in Looker.
  • Agent participation in impact programming is tracked in a spreadsheet.
  • Agent follow-up surveys via SamaHub capture additional information about engagement.
  • Performance review and coaching forms are circulated weekly in spreadsheets to assess agent performance.
  • Enrollment and graduation rates for Samasource Training trainees are tracked in spreadsheets.
  • Attendance and performance of Samasource Training trainees is tracked on Kannu, Samasource’s interactive online platform of choice for classroom management.
  • Job interview attendance of Samasource Training trainees is tracked in spreadsheets.


Samasource has strong data systems that directly capture all the ways in which participants are expected to engage in Samasource activities, from daily attendance to agent performance.

The bulk of engagement data are collected by automated systems, such as logged activity on Kannu and SamaHub, thereby reducing the chances of error. The remaining engagement data are collected using follow-up agent surveys, which are standardized and administered through SamaHub. The follow-up survey asks about tenure, average days per month and hours per day worked, income, training received and indirect benefits (such as meals and transportation) received at the work center. The questions are straightforward and phrased in a neutral manner.

Once data are collected, for instance via SamaHub, they are accessed and analyzed on Looker. Ongoing data review on Looker gives staff the ability to catch systematic errors in data collection almost as soon as they arise. Furthermore, the same data points are collected through multiple channels: automated, objective systems as well as follow-up agent surveys. For instance, agent tenure and average hours per day worked are captured on both SamaHub and in follow-up survey responses, thereby enabling Samasource to triangulate its data and minimize bias.


Engagement data analysis and reporting are not only timely, but also dynamic, and are readily available to meet the needs of decision-makers in the organization. Agent tenure is compiled into a quarterly dashboard on Google Sheets. The dashboard also breaks down tenure by delivery center and summarizes tenure in annual averages. Besides quarterly and annual summaries, live information on agent headcount is also accessible at any time on Looker.

Each partner delivery center has its own process for quality assurance of agent performance. At one of the delivery centers ImpactMatters visited, agents upload to SamaHub a sample of their work every hour and an Interval Sample Report is generated, listing links to their work samples. The Quality Assurance Manager uses this report to verify the quality of task completion and each agent is given a daily quality score. This is supplemented by a performance review and coaching form circulated weekly in a spreadsheet. The form includes action points agreed upon by supervisory staff and the agents themselves. The fast turnaround time for reports allows supervisory staff to ensure action is taken to address underperformance.

Samasource’s engagement data address a number of assumptions and risks underlying its theory of change: the theory assumes that corporate clients are willing to entrust business processes to Impact Sourcing agents and that agents are interested in and benefit from impact programming, and Samasource faces reputational risks if key employers have bad experiences with Samasource alumni and Samasource Training graduates. Samasource collects engagement data to substantiate these assumptions and to mitigate this risk. For example, performance data from SamaDC are used to show there is no difference between the quality of work product from Impact Sourcing agents and that of regular delivery center agents.

Interviews with key management staff demonstrate commitment to take action based on monitoring reports. For example, Samasource investigated the relationship between fluctuations in work volumes, program attrition and agent tenure using activity and engagement data in order to understand if attrition was voluntary or involuntary. Samasource subsequently improved its processes to minimize fluctuations in work volumes and reduce involuntary attrition.


Processes for obtaining engagement data are largely automated on SamaHub and Kannu, and much of data analysis is automated on Looker. Measuring attendance and work hours logged, for instance, requires almost no effort from agents and trainees as well as Samasource staff. The follow-up survey for agents takes approximately 25 minutes to complete, and in order to avoid disrupting their work, agents are given a two-week window to submit the survey.6

Engagement data collected on SamaHub, such as attendance, hours logged and follow-up surveys, are protected by two-factor authentication and roles-based permissions. Samasource also follows confidentiality and consent protocols for the collection and use of agent information.6


Samasource's theory of change requires that agents engage with the program in the following ways: agents have adequate attendance rates; agents complete tasks at least to the degree necessary to remain employed at Samasource (as non-performance is grounds for termination); agents' tenure at Samasource is long enough to constitute meaningful work experience in the eyes of future employers; agents participate in impact programming offered at the workplace; trainees attend a minimum number of trainings to benefit from the course; trainees complete assignments to the degree necessary to graduate; and trainees graduate from the course. The engagement data Samasource collects track the various ways that participants are expected and required to engage with program activities. Samasource does not collect data about extraneous participant engagement.

Samasource has shared with ImpactMatters standardized data collection instruments, data collection guides, raw data and data reports for capturing program participants' engagement in its activities. Samasource publishes the number of active agents and enrolled trainees per quarter, as well as to date, in its impact scorecards and publishes the number of agents and trainees per year in its annual reports.5 Samasource also describes its methodology for engagement data collection in its publicly available white paper, "How and Why We Measure Impact at Samasource 2016."6



Samasource collects feedback from all its major stakeholders: Impact Sourcing agents, Samasource Training trainees, partner delivery centers, partner training centers and Samasource staff (not line-level workers). While it has demonstrated good organizational listening, Samasource could improve by formalizing communication processes and data collection with respect to feedback from partner organizations. It is important that information gleaned from partner organization meetings and correspondence be documented and analyzed systematically, especially as Samasource scales, in order to identify trends in partner organization feedback and opportunities for improving Samasource's operations and program model. Samasource could also benefit from providing regular group-based forums for feedback from agents at large. Existing feedback mechanisms are limited to twice-yearly and post-Samasource surveys or interviews with a small subset of the agent population; they do not capture the full scope of feedback from the entire agent population at the desired frequency.

Nonetheless, feedback data collection is performed responsibly and Samasource has proven that it regularly uses feedback data as the basis for making decisions, such as in the design of impact programming in response to needs voiced by agents.

One remaining area for improvement is transparency with respect to feedback data, as Samasource does not at present make publicly known what types of feedback data it collects and how they are collected.


Feedback mechanisms for agents and trainees:

  • Satisfaction survey: Anonymous bi-annual agent satisfaction survey.
  • Agent interviews: Interview with a subset of the agent population.
  • Impact programming sessions: Samasource collects agent feedback on select impact programming sessions.
  • Post Samasource Survey: The survey includes some feedback questions for alumni.
  • Town hall meetings: Samasource conducts monthly meetings with agents.
  • Focus group discussions: Staff conduct discussions with a subset of agents (samples not strictly randomly selected).
  • Walk-in: Agents can visit and discuss with HR assistants at the SamaDC.
  • Exit survey: Includes some feedback questions for Samasource Training graduates.
  • Post Samasource Training Survey: Includes some feedback questions for graduates.

Feedback mechanisms for partner delivery centers and training centers:

  • Quarterly business reviews: In-person quarterly business reviews with each partner delivery center’s management team.
  • Partner delivery staff: A full-time partner delivery staff member serves as the Account Manager and liaises with Samasource on all training, logistical and accounting issues.
  • Ad-hoc feedback: Meetings or email correspondence with training partners.
  • Debriefing: An in-person debriefing session between the training partner’s Instructor and Program Staff and Samasource’s headquarter staff at the conclusion of the course.

Feedback mechanisms for Samasource staff:

  • Staff Appraisals: One-on-one staff appraisals for Samasource Team Leads and Quality Analysts.
  • Weekly meetings: Weekly office-wide meetings at some delivery centers.
  • Regular check-in sessions: One-on-one biweekly sessions between Samasource Training trainers and senior trainers; biweekly sessions between senior trainers and their supervisor; monthly sessions between senior trainers and the program manager.
  • Monthly meetings: Samasource Training monthly meetings.


There are a number of feedback channels for multiple Samasource stakeholders. Some channels lend themselves better to the generation and collection of feedback data than others. For example, the anonymous bi-annual agent satisfaction survey for line-level agents generates a wealth of data, including Net Promoter Scores that quantify the likelihood that agents will recommend Samasource to friends who are looking for work. By contrast, focus groups with agents are conducted on an as-needed basis with a non-random sample of agents and are not formally documented. This difference is to be expected because of the variety of objectives in collecting feedback data: the agent satisfaction surveys aim to provide the information that managers need to improve core functions by asking specific questions about whether agents understand how their performance is evaluated and whether they are receiving the right training to do their jobs, while the focus groups serve to "get a sense of the agent experience at [delivery centers] and create a participatory approach to designing pilot programs.”

In advance of quarterly business reviews with each partner delivery center, the Impact Sourcing team creates customized presentation slides that serve as the agenda for the review session. The slides show the impact metrics and agent satisfaction metrics relevant to the partner delivery center in question. Partner delivery center managers have the opportunity to give feedback during a designated Question & Answer session after the slides. Action items following the quarterly business review are noted in an email and circulated among the relevant parties. Samasource Training’s debriefing sessions with training partners could benefit from a similar approach; at present, there is no formal way of documenting feedback data collected from these debriefing sessions.

Samasource uses standardized instruments to collect feedback from agents and trainees, thereby increasing the reliability with which data are collected. Feedback data from partner organizations and staff tend to be collected informally and verbally in meetings.

Questions in Samasource’s feedback-related survey instruments are phrased in a neutral way that likely encourages unbiased responses from agents and trainees. The anonymous agent satisfaction survey concludes with two open-ended questions that are likely to reveal a greater breadth in responses beyond the systematic limitations of preceding multiple choice survey questions. In addition, before conducting agent interviews, interviewers are trained to take a number of measures to make agents feel comfortable, such as ensuring partner delivery center managers are not present and that agents who need translation services have access to them. Human Resources staff at SamaDC are also viewed as independent from the management team, which likely creates an environment conducive to honest feedback.

Lastly, multiple parties within Samasource have expressed a desire for an additional feedback mechanism for agents; in particular, it has been suggested that delivery center-wide town hall meetings be used not only for communicating strategy to agents but also as a forum for agents to voice their feedback at least quarterly. This is corroborated by interview evidence from a management staff member as well as feedback data from agents via the bi-annual agent satisfaction survey. The bi-annual survey may not be a frequent enough opportunity for feedback, while the agent interviews and focus groups, which are carried out with only a subset of all agents, may not be an inclusive enough opportunity for feedback. For these reasons, Samasource’s feedback systems do not satisfy the Credibility criterion completely.


Reports on agent satisfaction are created on a quarterly and bi-annual basis and made available to the Impact Team and other key management staff. Reports presenting results from the Post Samasource Survey are produced annually by the Impact Team and circulated internally.

Samasource’s theory of change depends on the assumption that agents are interested in and benefit from impact programming offered at delivery centers. Samasource does indeed collect feedback data to validate this: staff survey agents at the conclusion of impact programming sessions, though not after every session, in order to avoid survey fatigue. For instance, after a financial literacy training session, agents were invited to rate the overall session and the speaker’s level of engagement, and to list their favorite and least favorite topics, areas of struggle in financial discipline, and suggestions for financial topics to be included in future sessions.

Samasource has a strong history of designing impact programming sessions in response to feedback data from agents. The Samasource Managing Director cites Health Week at SamaDC as one such example. Other members of the Samasource leadership have also voiced a commitment to systematically reviewing and responding to feedback data.


The Impact Team, which is responsible for designing Samasource surveys, is mindful of the time burden and effort required from agents and trainees in responding to surveys. Effort is made to keep surveys streamlined and to avoid duplicative data collection.

Samasource's proprietary task management and database platform, SamaHub, keeps feedback data secure from misappropriation. A standard consent and waiver form is used to obtain informed consent from participants when Samasource needs to collect photo, video and interview content. Staff are also given detailed guidelines for ethical photography and videography before conducting field visits.


Feedback data are used to track all aspects of the theory of change, including satisfaction with pay and impact programming offered (Activities), agent tenure and skills gains (Process Metrics), and whether the most important perceived benefit of the Samasource experience is future wages, wages while at Samasource, the ability to pursue further education, or no benefit at all (Social Failures and Target Population). Samasource does not collect extraneous feedback data on programs that are not tied to its theory of change.

During the audit engagement, Samasource shared with ImpactMatters data collection instruments, data collection guides, raw data and data reports related to feedback data. Samasource publishes videos of agent stories, which are based on interviews, photos and videos taken from a select few agents.5 However, Samasource does not publish its methodology for collecting feedback data. In order to demonstrate full transparency, it is recommended that Samasource provide a public description of what types of feedback data it collects and how they are collected.



Samasource has responsible systems for collecting outcomes data on program participants and has proven its willingness to take action based on outcomes data. However, outcomes data may be systematically biased given the low rates at which Samasource alumni and Samasource Training graduates can be successfully contacted to complete surveys after program exit. Samasource is also measuring some intermediate outcome metrics that are non-essential to its theory of change, as constructed by ImpactMatters. It is advisable to allocate organizational resources elsewhere, such as to strengthening data collection systems for more essential outcomes.


According to the Samasource logical framework, Samasource uses the following systems to measure process metrics, intermediate outcome metrics and outcome metrics:

  • Baseline survey: Samasource conducts a baseline survey of agents at entry.
  • Follow-up survey: Samasource administers a follow-up survey to agents four to six months into their tenure at Samasource.
  • Post Samasource Survey: Conducted with a randomized sample of Samasource alumni.
  • Post Post Samasource Survey: For those who completed the Post Samasource Survey but were in school at the time, a second survey is conducted to follow up.
  • Training completion: Records on training completion from the Professional Services Group and SamaDC delivery teams.
  • Trainee headcount and graduation rates: Data collected by Samasource Training personnel.
  • SamaHub: Samasource’s custom task management and database platform tracks performance of work.
  • Household surveys: Enumerators conduct in-person household surveys every three years with random subgroups of current agents to collect more detailed information not captured in the baseline survey and give Samasource a greater understanding of the local contexts in which it works.


Samasource’s logical framework clearly identifies sets of process metrics, such as the number of active agents at Samasource; intermediate outcome metrics, such as the increase in self-reported ICT skill development post-Samasource; and outcome metrics, such as the percentage of Samasource alumni formally employed after Samasource. For each metric, Samasource has identified corresponding means of verification through survey data, departmental records or SamaHub automated data.

Samasource ensures the reliability of outcomes data collection by using standardized data collection instruments and guidelines for both the Impact Sourcing and Samasource Training components of its program across all implementation sites.

However, outcomes data are at risk of containing systematic error. A random sample of Samasource alumni is generated for participation in the Post Samasource Survey, but just 40% of those in the sample can be successfully contacted and only 80% of those contacted complete the survey (or 32% of the original random sample). It is possible that alumni who could not be contacted were precisely those who, for instance, could no longer afford mobile phone charges, had to relocate because they continued not to find work in their area and lost their original contact numbers in the process, or who have to share a phone with others. Outcomes data from the Post Samasource Survey may be biased upward if contactable respondents are more likely to be employed and earning higher wages than unreachable alumni.
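
The direction and size of this bias can be made precise with a standard decomposition (a sketch; earnings of non-respondents are, by definition, unobserved). With $p = 0.32$ the share of sampled alumni who complete the survey,

\[
\bar{y}_{\text{sample}} = p\,\bar{y}_{\text{respondents}} + (1-p)\,\bar{y}_{\text{non-respondents}},
\]

so the survey-based estimate $\bar{y}_{\text{respondents}}$ overstates the true average by $(1-p)\,(\bar{y}_{\text{respondents}} - \bar{y}_{\text{non-respondents}})$ whenever non-respondents earn less. With 68% of the sample unobserved, even a modest earnings gap between the two groups translates into a substantial upward bias.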


Outcomes data reports are delivered on an annual basis as well as cumulatively over multiple years. They summarize alumni’s employment and earnings outcomes after Samasource, broken down by country and by the industries in which alumni work. A longer reporting cadence is appropriate given the staff effort required to collect post-Samasource data: unlike the self-administered web-based surveys used at baseline and follow-up, individual phone calls must be made in order to reach Samasource alumni.

Samasource collects outcomes data (specifically, baseline survey and household survey data) to validate the assumption underlying its theory of change that agents are interested in and benefit from receiving impact programming. Baseline and household survey data expose related problems that agents face, such as lack of credit and savings mechanisms, sanitation and hygiene issues and poor perceived self-efficacy, and help inform the design of complementary impact programming at delivery centers.

The Samasource management team is committed to taking action based on outcomes data. Outcomes data revealed that, though wages were positive during employment at Samasource, agents’ available earnings would quickly return to extremely low levels due to poor financial management. In response, Samasource rolled out financial literacy training sessions to minimize the barriers agents face to achieving the best outcomes possible.


The time burden on respondents for completing Samasource and Samasource Training surveys ranges from 10 minutes (the Post Post Samasource Survey) to 45 minutes (the household survey). Current agents are given ample time to complete the follow-up survey to avoid disrupting their work.6

All baseline and follow-up surveys are self-administered and web-based, thereby reducing the data collection burden on Samasource’s own human and financial resources. In contrast, the Post Samasource, Post Post Samasource and Post Samasource Training Surveys require substantially more time and effort from staff: alumni and graduates have to be called at least twice when calls do not initially connect, and over half of all calls dialed never connect. However, Samasource has made efforts to reduce staff burden as well as to improve the contact rate for surveys: in 2015, Samasource introduced a new feature in SamaHub that uses twice-yearly scheduled alerts to keep agents’ profiles up to date. Results from surveys conducted after the SamaHub improvements are not yet available.

Samasource takes adequate measures to ensure surveys for the collection of outcomes data are carried out ethically in terms of upholding confidentiality and being transparent about Samasource's objectives for collecting data.


Samasource targets an extensive list of process metrics, intermediate outcome metrics and outcome metrics in its logical framework. However, not all are well linked to the Samasource theory of change, as constructed by ImpactMatters. Of greatest concern are the intermediate outcome metrics listed in Samasource's internal version of its theory of change: the percentage change, from program entry to 4-6 months into Samasource employment, in income, household expenditure, healthcare expenditure, own education expenditure, own savings, miscellaneous expenditure (discretionary income), food expenditure, remittances and rent expenditure. These intermediate outcome metrics are considered extraneous to the theory of change – as it appears in the present analysis – for two reasons. First, an increase in income while employed at Samasource is likely to lead to related increases in savings and in expenditure in a multitude of areas, and measuring these knock-on effects double-counts benefits unnecessarily. Second, the increase in income while employed at Samasource is not the primary benefit of the program; there are many other more efficient transfer systems to re-allocate money to a target population. Rather, the primary benefit of the program is the income gain for Samasource alumni post-Samasource. As these data are not being used to make routine management decisions but rather to establish the impact of the program, spending resources measuring these extraneous outcome metrics is not advised.

Samasource shared with ImpactMatters all requested documents related to outcome data. Samasource is an industry leader in transparency with respect to its methodology for evaluating change in participant outcomes. Its methodology white paper and impact benchmark concept note6 fully disclose assumptions made, sample sizes and survey response rates, reliance on self-reported data, and the lack of a strong counterfactual. Samasource's most recent annual report5 also explicitly and candidly acknowledges the pre-post nature of its outcomes data and the small sample sizes used. Counterfactual average earnings over time are recognized as being merely "expected,” as opposed to credibly measured using a control group.

Learning and Iteration


Samasource makes systematic and continuous changes to its model on the basis of high-quality data. Iterations are subjected to testing and are adopted based on either a counterfactual test of impact, strong effect sizes, or both. Samasource’s iterations were adopted systematically, with recognizable components of the Plan-Do-Check-Act cycle.37 Samasource also sources, considers and tests iterations at frequent intervals. The distinguishing feature of Samasource’s Learning and Iteration is that it has demonstrated willingness to make difficult strategic decisions, such as closing down an unsuccessful program in order to substantially reimagine it.

Criteria Finding
Iteration is based on data Yes
Data are of high quality Yes
Iteration is systematic and periodic Yes


Samasource has adopted six iterations over the past three years. Three are considered “major” iterations, in that they are expected to have detectable downstream effects such as an increase in alumni wages post-Samasource, while one is considered a “minor” iteration because it is only expected to have detectable proximate effects like an increase in agent headcount. Finally, two iterations are considered “additive,” in that they are additional components to the intervention rather than modifications.

Though all six iterations were reviewed in assessing Samasource’s Learning and Iteration, additive iterations are considered to have less of a burden of proof and are therefore not included in the following written analysis.

Samasource has adopted three major iterations in the last three years:

  • Samahope: Samahope was a crowdfunding platform launched in 2013 that connected individual small donors to developing country doctors in order to provide healthcare to women and children in need.4 In the last quarter of 2015, Samasource made the strategic decision to merge Samahope with CaringCrowd, another global health crowdfunding platform. Samahope had cumulatively enabled the treatment of 16,917 patients by the time it was spun out.38
  • Samaschool USA: Originally “SamaUSA,” Samaschool began in 2013 in San Francisco, California (later expanding to Merced, California and Dumas, Arkansas), with the aim of providing low-income Americans with digital skills and job readiness training in order to increase incomes.5 Samaschool faced several design challenges, including fierce global competition in the market for traditional microtasking and nuances of the gig economy related to urbanization and culture. Samasource decided to wind down Samaschool as it was originally conceived, and is testing new program designs.
  • Impact programming: Impact programming addresses other barriers that agents face, such as poor financial literacy and low self-confidence in the workplace. These barriers ultimately prevent agents from achieving the best long-term outcomes possible, even as they receive work and training from Samasource. Samasource is currently rolling out impact programming sessions for agents during paid work hours (and for free during off-hours), such as financial literacy training, a week of health and wellbeing programming and a mentorship program.

Samasource has adopted one minor iteration in the last three years:

  • Samasource Delivery Center (SamaDC): Operating its own delivery center gives Samasource better visibility into delivery center operations and costs. SamaDC serves as a testing platform for staffing client projects entirely with agents that fully meet targeting criteria and for pilot programs before they are scaled up in other delivery centers. SamaDC also reduces Samasource’s reliance on a partnership-based model.

Samasource has adopted two additive iterations in the last three years:

  • Samasource Training: Samaschool Kenya was rebranded as “Samasource Training” in 2016 to recognize the integration of Samaschool Kenya and Samasource Kenya.39 Samasource Training is a funnel for microwork employment at Samasource; about 75% of Samasource microworkers in Nairobi are graduates from Samasource Training. The objectives of the integration are to observe how well the Samasource Training course prepares participants to perform on-the-job at Samasource, and to use Samasource’s direct contact with corporate clients to ensure the Samasource Training curriculum is responsive to market needs.
  • Tightening partnerships with delivery centers: In 2014, Samasource had about 12 partner delivery centers; at present, there are four. Samasource has also tightened requirements for partner delivery center eligibility using the Partner Certification Program (PCP) and by switching from quarterly to monthly payroll reporting. This allows Samasource greater control over mission-critical compliance with targeting and payroll requirements.

Data Quality High

The four iterations assessed for data quality are Samahope, Samaschool USA, impact programming and SamaDC. All but impact programming are supported by high-quality data, and thus the overall quality of data on which iterations are based is judged to be high.


Although Samahope was successfully enabling the treatment of patients in need, and doing so at low cost, monitoring data revealed that the number of patients treated was not increasing rapidly enough to reach the target of one million patients. In order to scale operations up, the senior management team began to explore partnerships with larger health-focused organizations, but learned that partnerships alone would not be enough to leverage the experience and established networks of these organizations to the extent necessary. Samasource therefore began considering fully spinning off Samahope to be operated by another organization. Furthermore, the health outcomes that Samahope existed to improve were not aligned with the employment and earnings outcomes behind the Impact Sourcing and Samasource Training programs. Samasource used an intensive vetting process to assess the three health-focused organizations that were contending for Samahope. The three organizations were assessed in terms of mission alignment with Samahope and likelihood of scaling the Samahope program, and Johnson & Johnson’s CaringCrowd was determined to be the best fit.

Samaschool USA

By mid-2015, Samasource had collected pre-post participant data from the first SamaUSA cohorts in San Francisco, Merced and Dumas in order to test its original digital skills and job readiness training program. Even though participants’ digital literacy levels and other learning outcomes had improved, the training was ultimately not translating into an increase in graduates securing digital job opportunities. Samasource found that participants were falling short of its internal targets for employment outcomes. Based on these underwhelming results, supplemented by feedback from key stakeholders, such as gig economy platforms and partner training centers, and third-party research on the online platform economy, Samasource made the decision to overhaul its SamaUSA theory of change. Samasource then conducted extensive research using data from two online work platforms (Upwork and Freelancer) and the O*NET occupational database to identify the most viable occupational tracks for its U.S.-based trainees. This resulted in two distinct strategies to meet the needs of two trainee populations: one for trainees in urban areas and one for those in rural areas. In-person services, such as driving for Uber, and social media marketing were determined to be best suited to urban participants, while low-skill, low-paying remote labor, such as customer service, or social media marketing for those already equipped with basic IT literacy, was better suited to rural participants. Backed by high-quality pre-post testing data and with well-informed new strategies in place, Samasource began to wind down the SamaUSA program as originally conceived.

Impact programming

Samasource has multiple impact programs at various stages of development in the pipeline, and has provided testing data from one program, financial literacy training. Samasource collected engagement (attendance) and feedback data from agents who attended a financial literacy training session. Feedback from agents was generally positive, with 87.5% of survey respondents rating the overall session as good or excellent. However, Samasource’s goal with impact programs is to eventually affect outcome metrics such as health status, long-term employment and income gains, and these have not been tracked in iteration testing. Samasource did not test whether those who received financial literacy training subsequently had better financial management and higher incomes. The sample size was also very small: 24 agents attended the training and only eight responded to the feedback survey. The quality of data behind the decision to scale up financial literacy training is therefore judged to be low.


At SamaDC, Samasource is able to pilot test various initiatives and measure their effects on overall operating costs and on quality of work product. Samasource conducts A/B testing by comparing SamaDC’s routine monitoring data to those of partner delivery centers, which serve as the control group. For example, SamaDC has been able to show the financial feasibility of offering a transportation stipend to agents. SamaDC has also shown that tasks previously considered beyond the scope of impact agents can indeed be done to the same quality as by non-impact agents, even though the latter are likely to be more experienced. Based on high-quality data from SamaDC and partner delivery centers, Samasource has made the decision to not only continue operating SamaDC, but also to open more delivery centers of its own and become mostly Samasource-owned-and-run by mid-2017.
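
The report does not specify the statistical procedure behind these comparisons. Purely as a hypothetical illustration of the kind of A/B comparison described, the snippet below applies a two-sample Welch test to made-up daily quality scores from SamaDC and from a partner delivery center:

# Hypothetical illustration only: the quality scores below are invented, and this
# is not a description of Samasource's actual analysis.
from scipy import stats

samadc_scores = [0.92, 0.88, 0.95, 0.90, 0.93]    # daily quality scores, SamaDC agents
partner_scores = [0.91, 0.89, 0.94, 0.92, 0.90]   # daily quality scores, partner-center agents

t_stat, p_value = stats.ttest_ind(samadc_scores, partner_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value here would be consistent with no detectable difference in work quality.

In practice such a comparison would also need to account for differences in task mix and agent experience across centers.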

Systematic Change Yes

Samasource’s four iterations were adopted through a systematic process in that they each underwent a Plan-Do-Check-Act learning cycle, wherein the iteration idea was sourced; tested; analyzed and summarized for decision-makers; and then accepted and implemented systematically.


Samahope

Samasource sourced the idea for spinning off Samahope by examining internal monitoring data on process metrics (number of patients treated) and information gathered from potential health-focused partner organizations. Annual reports and quarterly Impact Scorecards demonstrated Samahope’s slow growth in process metrics,2 while a detailed and formal vetting process demonstrated which peer organization would be best positioned to oversee Samahope. With these findings in hand, decision-makers were able to enact the spinoff. Official communications announcing the merger of Samahope and CaringCrowd were released publicly in a coordinated manner.4

Samaschool USA

Samasource was inspired to overhaul its Samaschool USA theory of change based on internal monitoring data from its San Francisco, Merced and Dumas sites; feedback from key stakeholders, such as gig economy companies and training partners; and two external studies on the online platform economy, one by the JP Morgan Chase Institute and one by Intuit and Emergent Research. Staff also compiled slides to share internally the findings from two phases of research aimed at finding alternatives to the original program. Equipped with results from iteration testing in San Francisco, Merced and Dumas, as well as well-researched program alternatives, Samasource made the decision to phase out the original program and phase in the alternatives. It was evident from multiple staff interviews that the decision was clearly communicated across the organization.

Impact programming

Samasource uses regular survey data collected at multiple time points (baseline, follow-up and a triennial household survey) to directly inform the design of impact programs. It tested one of these programs, financial literacy training, by running a pilot session and collecting feedback data from attendees. These data were analyzed and reported in a presentation for internal use before the training program was scaled up to all other countries of operation.


SamaDC

Based on internal monitoring data, Samasource found that its for-profit partners required careful monitoring to ensure that impact and payroll goals were being prioritized and not sacrificed in favor of profit. The management team determined that it could achieve higher targeting effectiveness and payroll compliance if Samasource ran its own delivery center without a profit motive. Samasource has used SamaDC in A/B testing against partner delivery centers. The results of testing have been shared in raw datasets and data reports that compare SamaDC’s payroll and targeting performance to that of other delivery centers. These results have reflected favorably on SamaDC and support the management team’s decision to continue owning and operating its own delivery center, and eventually to open one in each country in which it is currently active. Detailed documentation of the decision-making and implementation process has been shared, including project timelines, roles and responsibilities of staff, delivery center capacity utilization, and the net costs of implementation.

Periodic Change Yes

Samasource has systems for periodically considering and adopting iterations to its core model.

Samasource conducts quarterly “learnings” calls via Google Hangouts that are completely open to the public. During the web call, members of the senior management team field questions from call participants, submitted live on Hangouts or emailed in advance. Senior managers also give brief updates on key program successes and learnings from the quarter. The quarterly “learnings” calls are a chance not only to promote transparency, but also to source potential iterations from outside the organization.

Besides analyzing survey data to inform the design of impact programming, the Samasource Impact Operations Manager conducts about four agent interviews or focus groups every week to source ideas for potential impact programming sessions. The interviews and focus groups also allow Samasource to source ideas for iterations more generally, not just for impact programming.

Documents shared and interviews conducted with the Samasource team demonstrate that staff members, at multiple levels of the organization and across functional divisions, are able to articulate which iterations are under consideration and which are under testing. Management staff clearly identified the various hypotheses currently under testing for Samaschool USA, and documentation of the data being used to test these hypotheses has also been made available. Samasource also shared a roadmap outlining which impact programming sessions are being piloted and launched, and in which implementation sites, for each quarter of 2016.

With six iterations in the past three years, including the cessation of two programs that were determined to need more external resources or a critical rethinking of their design, Samasource has shown that it not only is unafraid of change but actively embraces it. Staff testimonials show that Samasource has successfully cultivated a “Silicon Valley culture” of organizational agility and innovation.



Nonprofit Details

Legal Name Samasource
EIN 26-2547062
Founded 2008
Chief Executive Leila Janah
Revenue $9,468,414 (2015)
Contact email

Physical and mailing:
2017 Mission St.
San Francisco, CA 94110

Note to donors

To donate, please visit or contact

Impact Audit Detail


Evidence review, document and data review, headquarters visit, field visit to partner centers in Uganda, field visit to partner and delivery centers in Kenya, field visit to Samasource Training program in Kenya, senior management interviews, field staff interviews, participant interviews and key informant interviews.

Completed 2017-01-31
Published 2017-02-23
Valid through 2019-02-28
Impact audit team

Tamsin Chen, Elijah Goldberg and Ben Mazzotta

Conflict disclosures

This impact audit was commissioned and paid for by Samasource. Kevin Starr (member of ImpactMatters’ board) is a former funder of Samasource through Mulago Foundation. No other conflicts.


Glossary

A/B Test

An A/B test compares the current version of the program to a modified version in order to test which version is more effective at changing engagement, outcomes or some other metric of interest. A/B tests do not have a pure control group and are not designed to test the overall impact of a program. Instead, they are intended to improve the design of a program by determining whether a nonprofit should modify its program or keep it as is.

Activity Data

Activity data is a form of monitoring data that tracks program activities completed and outputs delivered. Activities are the day-to-day tasks an organization must undertake in order to provide a product or service. Each program activity has at least one output associated with it. Outputs are the products or services produced by the nonprofit.

Additive Iteration

An additive iteration is a change to a nonprofit’s program that adds a new component, as opposed to modifying an existing component or removing a component. When assessing how a nonprofit learns and iterates, an additive iteration has a lower burden to justify adoption if it meets three conditions: (1) it is unlikely to have a negative impact (but may have no impact), (2) it is unlikely to reduce the impact of other components of the program and (3) it does not significantly increase program costs.


Agents

Current line-level microtask workers at Samasource. See also Alumni.


Alumni

Former agents; individuals who previously worked as line-level microtask workers at Samasource. See also Agents.


Applicability of Evidence

Applicability of evidence to a nonprofit’s program includes two distinct concepts: quality and relevance. Quality captures the internal validity of the evidence: is the evidence free of factors that may bias the reported findings? Relevance captures the external validity of the study to the nonprofit’s intervention: to what extent do we expect the intervention to generate similar impact as the findings observed in the study?


Attrition

Attrition refers to cases where members of a sample drop out between rounds of data collection. For instance, if 100 people are surveyed at the beginning of the program but only 90 can be surveyed at the end of the program, the attrition rate is 10%. Attrition can be problematic if attrition from the sample is correlated with outcomes. For instance, when following up on a health intervention, those who are sick may be more difficult to find than those who are healthy. As a result, the reported results may be biased because they include outcomes for fewer sick individuals.

Average Costs

Average costs are the total amount of money spent by the nonprofit divided by some unit of output or outcome. Average costs include costs that are fixed and not expected to increase as outputs or outcomes grow, such as salaries of senior managers. See also Marginal Costs.
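
As a hypothetical illustration (figures invented for this example), average cost divides all spending by all units delivered, regardless of whether some of that spending is fixed:

\[ \text{Average cost} = \frac{\text{Total cost}}{\text{Units of output}} = \frac{\$500{,}000}{1{,}000 \text{ agents}} = \$500 \text{ per agent} \]

If serving one additional agent would add only, say, $300 of incremental spending, that $300 would be the marginal cost (see Marginal Costs).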


Bias

Bias is a non-random error in a statistical estimate. Whenever estimates are based on a sample from a larger population, there will be random error in that estimate: no two samples will produce exactly the same estimates. An estimate is biased when those errors lead it to be consistently above or below the true value that is being estimated.

Business Process Outsourcing (BPO)

Contracting out of non-primary business activities, such as customer/call center relations, payroll and accounting, to a third-party provider.

CART (Credible, Actionable, Responsible, Transportable)

The CART standard is a method for understanding the quality of monitoring systems. CART stands for:

  • Credible: Monitoring systems are credible if they collect high-quality data that is analyzed accurately.
  • Actionable: Monitoring systems are actionable if the nonprofit commits to act on the data that it collects.
  • Responsible: Monitoring systems are responsible if the nonprofit minimizes the burden of data collection and collects data ethically.
  • Transportable: Monitoring systems are transportable if the data collected is tied to the nonprofit’s theory of change and is shared appropriately.

Classroom-based Training

Training that takes place off-site in a classroom or other training facility. See also On-the-job Training.


Cluster-randomized Trial

A study is cluster-randomized if the randomization was performed at the group (or cluster) level, instead of the individual participant level. Types of clusters include, but are not limited to, villages, schools and districts. See also Randomized Controlled Trial.


Consumption

Definitions of consumption vary, but it tends to be defined as those goods and services consumed by individuals. In economic development, there are particular measures of consumption that are important, including food consumption, non-durable consumption (items that have a short lifespan, such as clothing) and durable consumption (items that have longer lifespans, such as appliances).

Control Group

A control group is a group of participants who did not receive the intervention. Control groups enable nonprofits and researchers to compare what happened to beneficiaries in their program to what might have happened if they were not in the program. See also Treatment Group.

Cost of Impact

The Cost of Impact is a ratio of impact per dollar spent by the nonprofit. This estimate provides the best guidance to donors about what a contribution to the nonprofit could achieve. The Cost of Impact framework has the additional benefit of being applicable when benefits are not measured in dollars (for instance, lives saved or additional years of education).

Counterfactual; Counterfactual Evidence

The counterfactual is what would have happened in the absence of a program or other event. Understanding the counterfactual is essential to understanding the impact of a program. Participant outcomes may change over time for many different reasons not related to the program. By comparing the difference between participant outcomes and counterfactual outcomes, the impact of a program can be estimated.

The counterfactual cannot be directly measured, as researchers cannot observe the same participant both participating and not participating in the program. However, it can be approximated by randomizing participants into an intervention group and a control group, and then comparing outcomes across the two different groups.
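
As a stylized illustration (numbers hypothetical), the impact estimate is the arithmetic difference between observed and counterfactual outcomes:

\[ \text{Impact} = Y_{\text{observed}} - Y_{\text{counterfactual}} \]

For example, if program participants earn $3,000 in the year after the program while a comparable control group earns $2,200 over the same period, the estimated impact is $800 per participant.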


Crowdsourcing

Obtaining paid or unpaid services, ideas, content and funding from a crowd of people, usually via the Internet; portmanteau of "crowd" and "outsourcing."


Design Stage

A nonprofit at the design stage has a program model that is undergoing change.

Digital Economy

The global marketplace of economic activities enabled by Information and Communications Technologies (ICT). Also known as the "Internet economy," "new economy" and "web economy."

Digital Freelancing

Participating in income-generating activities on a self-employed and temporary basis, as facilitated by Information and Communications Technologies (ICT). Common digital freelancing activities include web design and computer programming.

Digital Literacy; Information and Communications Technology (ICT) Literacy

A person's ability to use Information and Communications Technologies (ICT) to search for, analyze, use, produce and transmit information.

Discount Rate

People tend to value benefits in the future less than benefits in the present, for three primary reasons. First, benefits today can be reinvested and generate some return. Second, the future is uncertain, and we are often uncertain if future benefits will actually materialize. Third, most people are impatient, and prefer immediate gratification over future gratification. A discount rate captures this by discounting or reducing future benefits compared to current benefits.
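
As a sketch using the standard present-value formula (figures hypothetical), a discount rate r reduces a benefit B received t years in the future to its value today:

\[ PV = \frac{B_t}{(1 + r)^t} \]

For example, at a 10% discount rate, a $1,000 benefit received three years from now is worth $1{,}000 / 1.1^3 \approx \$751 today.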

Donor’s Cost of Impact

The Donor’s Cost of Impact is the ratio of net impact (gross impact less beneficiary costs) to the donor’s cost. Importantly, the Donor’s Cost of Impact, unlike the Cost of Impact, does not capture societal costs not paid by the donor. For instance, if a program is co-funded by a government grant, the net impact of the program is compared to just the donor’s costs, yielding a higher ratio than the Cost of Impact, which would include the donor’s and government’s costs.
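
One way to read these two ratios, using hypothetical figures: suppose a program generates $500,000 in gross impact, beneficiaries bear $50,000 in costs (so net impact is $450,000), a donor contributes $200,000 and a government grant covers a further $100,000. Then:

\[ \text{Donor's Cost of Impact} = \frac{\$450{,}000}{\$200{,}000} = 2.25 \qquad \text{Cost of Impact} = \frac{\$450{,}000}{\$300{,}000} = 1.5 \]

Each donor dollar is credited with $2.25 of net impact, while each dollar of total (donor plus government) spending is credited with $1.50.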

Economic Significance

“Economically significant” results means the study found an effect of an intervention (say increased literacy) that is not only statistically significant (i.e. unlikely to arise by chance), but also is of a size that is “meaningful.” For instance, a 1% change in income may not be meaningful enough to invest in the program, but a 1% change in temperature may be. Economic significance combines the effect size, the statistical significance and the context to make a statement about whether that particular intervention achieves something that is “worth it.” Economically significant results are also commonly referred to as “important results” (in contrast to “significant results”, which implies statistical significance).

Effect Size

The size of the measured difference in outcomes between the group receiving the program and a similar group that did not receive the intervention.

Engagement Data

Engagement data is a form of monitoring data that tracks initial take-up of the program and how people interact with the product or service. For instance, if individuals are offered a savings account, engagement data might include how many people accept the offer and open a savings account, how many times people deposit and withdraw, how many times people check their balance and similar measures of how people interact with the product.


Enumerator

A person employed to collect data. Enumerators are often hired by survey firms to collect data on behalf of a study or nonprofit. Enumerators are often, but not always, independent of the program delivery staff.

Evidence from Elsewhere

In an impact audit, evidence from elsewhere includes studies – such as randomized controlled trials, quasi-experimental studies, laboratory results and systematic reviews – on interventions that are similar to the nonprofit’s intervention. The motivating theory behind using evidence from elsewhere is that there exists some true effect size for a specific intervention (or more realistically, a range of true effect sizes). If the same intervention has been measured elsewhere and shown to produce a particular effect – and that intervention has some true effect size – one should expect the same intervention, given a similar context and quality of implementation, to have a similar effect size (after accounting for random noise).

External Validity

See Relevance.


Externality

An externality is a consequence or effect of an activity that is not reflected in the cost of the goods or services exchanged. Externalities affect third parties, and those effects can be either positive or negative. Nonprofits often exist to correct externalities, such as pollution. Nonprofits can also themselves generate externalities, such as positive economic growth in a community when they provide services to some community members.

Extreme Poor

Extreme poverty is defined by the United Nations as “a condition characterized by severe deprivation of basic human needs, including food, safe drinking water, sanitation facilities, health, shelter, education and information.” One feature of the extreme poor is that they often cannot be defined in terms of geography alone – for instance, within a poor village, there may be a stark difference between the living conditions of the poor and those of the extreme poor.

Fair Wage Guide

A country-specific guide for fair/living wages, produced by Good World Solutions. See also Living Wage.

Feedback Data

Feedback data is a form of monitoring data that gives information about the strengths and weaknesses of the program from participant or other stakeholder perspectives. Feedback data can provide valuable information about how to improve program design.

Food Security

Food security is defined as having consistent and reliable access to a sufficient quantity of affordable and nutritious food.

Gig Economy

The marketplace through which labor for gigs (one-off projects or tasks) is bought by consumers and sold by independent contractors/freelancers. Common gigs include driving for peer-to-peer transportation services like Uber and Lyft, and listing one's residential property on a peer-to-peer homestay network like Airbnb. Also known as the "on-demand economy."


Graduates

Former trainees; individuals who completed the Samasource Training course. See also Trainees.

Graduation Program

A Graduation program is a multi-component intervention designed to help the extreme poor start a livelihood activity. Graduation programs often include initial targeting of the extreme poor, followed by training, selection of a livelihood activity, transfer of cash or a productive asset, and supporting services, including regular coaching and mentoring visits, access to savings or other financial products, and sometimes health or consumption support.

Human Capital

Human capital is all the knowledge, skills, attitudes and experiences that enable people to produce value for themselves or other people or organizations.

Human Intelligence Task (HIT)

Small tasks that need to be performed by humans as opposed to computers. Examples include writing product descriptions, flagging offensive content and digitally cropping photographs. HITs are usually bought and sold on online task marketplaces. The term was popularized by Amazon Mechanical Turk (MTurk). See also Microtask; Microwork and Task Marketplace.


Impact

Impact is a change in beneficiary outcomes attributable to a nonprofit’s activities and outputs. See also Outcome Metrics; Outcomes.

Impact Agent/Worker; Non-impact Agent/Worker

An impact agent is a Samasource worker who fully satisfies Samasource's targeting criteria, while non-impact agents are those who are above targeting cut-offs. Specifically, workers who are currently attending school and not living in a designated area of need within Kenya are non-impact workers, as are workers who have a high school certificate, work in the formal sector and earn above the benchmark weekly pay, and workers who have a college or master's degree, work in the formal sector and earn above the benchmark weekly pay. Samasource hires both impact and non-impact agents, but has explicit targets for the percentage of impact agents hired.

Impact Sourcing

Outsourcing jobs to disadvantaged populations, such as low-income women and youth in developing countries, with the general dual aims of both providing gainful employment to those populations and meeting the business needs of those outsourcing their work.

In-person Services

Services that are physically provided in person, such as cleaning, gardening and house-painting.

Independent Evaluator

An independent evaluator can include a research organization or academics engaged to analyze the impact of a program. Independent evaluators are not directly employed by the program, although they may be paid through program resources.

Independent Validation

Independent validation includes all evaluation efforts that include a substantial role for a third party in the design and analysis of the evaluation. Independent validations do not necessarily need to be conducted at arm’s length; the nonprofit is often involved in the design and analysis phase, and will be involved in executing the actual program itself and often in collecting data. However, to qualify as an independent validation, a third party must have a substantial decision-making role in design and overall control over analysis of the evaluation.

Information and Communications Technology (ICT)

The comprehensive set of hardware and software that enable users to search for, analyze, use, produce and transmit information. ICT is an extension of the term "information technology."

Internal Evaluation

Internal evaluation includes all efforts by the nonprofit itself to evaluate the impact of its work. Internal evaluation can include anything from collecting outcomes before and after implementation to conducting a randomized controlled trial.

Information Technology-Enabled Services (ITES)

Activities that exploit Information Technology (IT) to raise an organization's efficiency. Examples include call centers and teleworking.


Intervention

An “intervention” is what researchers study and nonprofits do. An intervention includes anything from a medical procedure to a conditional cash grant. ImpactMatters studies the intervention that a nonprofit implements, mapping that intervention to the existing evidence base on that particular intervention.

Job Readiness

The extent to which an individual has the foundational skills needed to be minimally qualified for a given job. Job readiness usually encompasses literacy and numeracy, basic digital literacy, and soft skills such as responsibility, workplace discipline and teamworking skills. Also known as "work readiness." See also Digital Literacy and Soft Skills.

Learning and Iteration

Learning and Iteration is the section in the impact audit that assesses and provides a rating for the historical processes the nonprofit has used to determine changes to the design of its intervention. We rate how well the nonprofit uses data to learn what does and does not work, and then appropriately iterates on its model.

Living Wage

The minimum income considered necessary for an individual to maintain a safe, decent and dignified standard of living in their community. A living wage accounts for at least the costs of housing, food, childcare, transportation, healthcare and taxes. Employers can choose whether they pay a living wage; by contrast, the minimum wage is a legal requirement for formal sector employment. Also known as "fair wage."

Marginal Costs

The incremental change in total cost due to increasing the quantity produced by one unit. In an impact audit, for example, marginal cost refers to the change in total cost incurred when one more participant is served in the nonprofit’s program. See also Average Costs.

Market Failure

A market failure is a situation in which the allocation of goods and services is not efficient: there exists another conceivable outcome in which some individuals could be made better off without making anyone else worse off.

Microtask; Microwork

Small tasks that collectively make up a larger project, completed by multiple microtaskers/microworkers over the Internet. The term was created in 2008 by Samasource founder Leila Janah.

Multiple Treatment Arm Randomized Controlled Trial

A randomized controlled trial that uses multiple treatment groups to simultaneously test variations of an intervention or disentangle effects of multi-component interventions. See also Randomized Controlled Trial.


Offshoring

Outsourcing business processes overseas in order to take advantage of lower costs. Commonly offshored business processes include manufacturing and customer service.

On-the-job Training

Training that takes place at the job site during normal working hours, after the employee is already hired. See also Classroom-based Training.

Online Outsourcing

Outsourcing of business processes that are delivered and paid for over the Internet.

Outcome Metrics; Outcomes

Outcome metrics are a direct measure of the success of the program in addressing the underlying problem. For example, in a malaria control program, the number of households with sufficient insecticide-treated bednets would be a process metric and the rate of malaria infections in the zone would be a measure of outcomes. See also Process Metrics.

It is important to emphasize that a change in outcome metrics is still not sufficient to document impact, since there is no counterfactual comparison. But the unit of measure of the outcome (malaria prevalence) is the same as the measure of impact, since the measure of impact is a simple arithmetic difference between the observed outcome and the estimated counterfactual outcome.

Payback Period

The length of time required to recover the cost of an investment. In an impact audit, the payback period is the number of years that must elapse before cumulative benefits exceed the costs of the intervention.
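
As a sketch (figures hypothetical), the payback period is the smallest number of years T for which cumulative benefits cover the cost C of the intervention:

\[ \text{Payback period} = \min \left\{ T : \sum_{t=1}^{T} B_t \geq C \right\} \]

For example, an intervention costing $1,200 per participant that raises earnings by $400 per year has a payback period of three years.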


Plan-Do-Study-Act Cycle

The Plan-Do-Study-Act cycle is a repetitive four-step model for carrying out change in an organization. In an impact audit, nonprofits are assessed on whether iteration ideas were sourced; tested; analyzed and summarized for decision-makers; and then accepted and implemented systematically. Also known as the Plan-Do-Check-Act cycle, Deming cycle and Shewhart cycle.

Poor Information

Poor information refers to a market failure wherein one or more parties has imperfect knowledge when transacting, investing, or establishing behavioral norms.

Pre-post Comparison; Before-and-after Comparison; Reflexive Comparison

Comparing the outcomes of a treatment group before and after receiving the intervention. The pre-intervention outcomes serve as a (poor-quality) estimated counterfactual. See also Counterfactual.


Problem

The problem comprises a target population that suffers from an underlying market or government failure (referred to as the source of the problem), leading to a social inefficiency. See also Social Inefficiency.

Process Metrics

Process metrics describe delivery of goods and services and observable behavior changes in the target population. See Outcome Metrics; Outcomes.

Purchasing Power Parity (PPP)

The purchasing power of a currency is the quantity of the currency needed to purchase a common basket of consumer goods and services. PPP equalizes the purchasing power of two given currencies by accounting for differences in the cost of living and inflation in the two countries.
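
As a sketch with hypothetical prices, the PPP exchange rate between two currencies is the ratio of the cost of the common basket in each currency:

\[ \text{PPP rate}_{A/B} = \frac{P_A(\text{basket})}{P_B(\text{basket})} \]

For example, if the basket costs 200 units of currency A and 4 units of currency B, the PPP rate is 50 A per B, so an income of 100,000 A per year corresponds to 2,000 B at PPP.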

Quality of Impact Evidence

Internal validity is the extent to which we are able to say that no other variables except the one under study caused the result. In other words, high internal validity denotes a degree of confidence that we can attribute causation (in some ways, another way of saying “impact”) to the intervention.

Quality of Monitoring Systems; Monitoring Systems

Quality of Monitoring Systems is the section in the impact audit that assesses how well the nonprofit produces and uses data to ensure it is consistently delivering its program at high quality. Monitoring systems track every step required in the delivery of the intervention using five types of data: activity, targeting, engagement, feedback and outcomes data. In an impact audit, monitoring systems are assessed to determine if they fulfill the CART standard. See also CART.

Randomized Controlled Trial

A randomized controlled trial is an evaluation design in which individuals (or groups) are randomly allocated into treatment and control groups, where the treatment group receives the program. The outcomes of the two groups are then compared in order to estimate effect size. See also Effect Size.


Relevance

External validity has two meanings. In the more general sense, it asks how sensitive a program is to context: if we tried the same thing elsewhere, how confident are we that we would find the same results?

Within the context of this impact audit, we use a narrower definition: “external validity” compares the findings of a particular study to the nonprofit’s program to determine whether the conditions under which that study was implemented are similar enough to believe they would hold for the nonprofit’s program as well.

In general, we consider four dimensions of comparability:

  • Intervention design: What components were included in the intervention? No two interventions will be exactly the same, and here theory plays a valuable role in understanding whether any differences in design are likely to change the “mechanism” through which the program works.
  • Intervention fidelity: How “well” was the intervention implemented? The same design can be carried out well or poorly. If two trainings covered the exact same material, but one was delivered by a native speaker and the other by a merely proficient speaker, we would consider the latter to have potentially lower “intervention fidelity.”
  • Local context: How similar are the geographic areas, and the accompanying social, cultural and political structures of those areas? This is challenging to assess, given the complexity of human nature. One approach is to replicate across different settings and examine differences in effect size. Another is to look at the mechanism through which a program works – for instance, providing a woman with a grant to start a small shop – and see if the market failure (credit constraints) applies elsewhere. If it does, an intervention adjusted for that context that does a similar thing – for instance, providing a woman with a grant to purchase livestock – is likely to work as well.
  • Targeted population: Does the intervention target generally the same group of people? This is challenging as well. However, looking for similarities in economic situation (such as credit constraints) or in other concrete similarities that motivate a program (such as being too poor to afford health care services) is one approach to mapping population external validity.

Restricted Donations

A nonprofit’s use of restricted donations is limited to particular purposes by the donor. See also Unrestricted Donations.


Samasource’s proprietary task management and database platform.

Sample; Sample Size

The sample is the portion drawn from a population for testing or analysis that is intended to enable statistical estimates of the behavior or attributes of the whole population. The sample size is the number of units that comprise the sample; a large enough sample size allows inferences about the whole population to be made.

Savings and Credit Constraints

Savings and credit constraints exist when people are limited by a lack of resources saved and a lack of borrowable resources, and are therefore unable to make productive investments that could raise their standard of living.


Scaling Stage

A nonprofit at the scaling stage is in the process of expanding its program.

Social Inefficiency

The social inefficiency is the result of the underlying market and government failures. It is the primary reason that the nonprofit’s intervention is socially beneficial. It effectively answers the so-what question: if a skeptic is willing to grant that the underlying market or government failure exists, then, “So what?”

Social Media Marketing

Activities aimed at increasing traffic and sharing/trending for a particular site or topic using social networking websites and applications such as Twitter and Facebook.

Social Rate of Return (SRR)

The SRR is the discount rate at which benefits equal costs. An SRR of 100% implies that benefits will equal costs when all future benefits are discounted at a rate of 100% per year. The SRR accounts for extra-financial benefits and costs – that is, all social benefits and costs not included in conventional financial accounts.
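
Expressed as a sketch in standard internal-rate-of-return notation, the SRR is the rate r* that equates discounted social benefits and discounted social costs:

\[ \sum_{t} \frac{B_t}{(1 + r^{*})^{t}} = \sum_{t} \frac{C_t}{(1 + r^{*})^{t}} \]

A higher SRR means the stream of social benefits covers costs even under heavy discounting of the future.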

Soft Skills

Personal attributes that enable people to work well with others, often in a professional setting. Soft skills include communication skills, teamworking skills, responsibility and motivation. Soft skills are often contrasted with hard skills, which refer to technical/occupational knowledge and skills.

Statistical Significance

A statistically significant result (often a difference of means of the main outcome of interest) is a result that is unlikely to arise as a result of chance. This doesn’t mean the finding cannot be due to chance – just that it is very unlikely.

Systematic Review

A type of literature review that collects and analyzes multiple research studies in order to answer a research question. After a research question is defined and appropriate research studies identified, data from the studies are extracted, assessed for their quality, analyzed, sometimes statistically combined in meta-analyses, and reported in such a way as to address the research question.

Targeting Data

Targeting data is a form of monitoring data that tracks the identification of beneficiaries to receive the program.

Task Marketplace

Usually online, a task marketplace is a platform for buying and selling completion of tasks, where tasks may be digital (e.g. social media marketing) or in-person (home repairs).

Technical and Vocational Education and Training (TVET)

Training focused on producing readily-employable personnel. TVET encompasses trade-specific skills and knowledge, as well as general job readiness and soft skills. See also Job Readiness and Soft Skills.

Theory of Change

A theory of change connects the problem to the intervention the nonprofit runs and to the expected process and outcome metrics. The objective of a theory of change is to provide a testable hypothesis for why the intervention solves some problem and thereby leads to positive changes for the targeted beneficiaries. In an impact audit, ImpactMatters requires that the problem be framed in terms of a market failure or government failure.


Trainees

Individuals currently enrolled in the Samasource Training course. See also Graduates.

Treatment Group

In an experiment, the treatment group is composed of experimental subjects that receive the treatment being evaluated. Also known as an intervention group. See also Control Group.

Unrestricted Donations

A nonprofit’s use of unrestricted donations is not limited to any particular purposes by the donor and may be used as the nonprofit sees fit. See also Restricted Donations.

Unconditional Cash Transfer

An unconditional cash transfer is a cash grant to a recipient whose use of the grant is not limited to any particular purpose or tied to the recipient’s fulfillment of any conditions.


Validation Stage

A nonprofit at the validation stage is testing its program’s impact.

Reference List

  1. Akerlof G. The Market for “Lemons”: Quality Uncertainty and the Market Mechanism. Q J Econ [Internet]. 1970; Available from:
  2. Samasource Impact Reports & Financials | Impact Sourcing [Internet]. [cited 2016 Dec 2]. Available from:
  3. Borokhovich M, Chatterjee A, Rogers J, Varshney LR, Vishwanath S. Improving Impact Sourcing via Efficient Global Service Delivery. In: Bloomberg Data for Good Exchange Conference. New York City; 2015.
  4. FAQ | Samahope [Internet]. [cited 2016 Dec 2]. Available from:
  5. Samasource | A Non Profit Organization | Impact Sourcing [Internet]. [cited 2016 Dec 2]. Available from:
  6. Samasource | Impact Methodology and Measurement [Internet]. [cited 2016 Dec 2]. Available from:
  7. Adda J, Dustmann C, Meghir C, Robin J. Career Progression, Economic Downturns, and Skills. NBER Work Pap Ser. 2013;(1889).
  8. Godlonton S. Returns to Work Experience: An Experimental Approach. 2012.
  9. Marinescu I, Triyana M, Altonji J, Shakotko R, Altonji J, Williams N, et al. The sources of wage growth in a developing country. IZA J Labor Dev [Internet]. 2016 Dec;5(1):2. Available from:
  10. Poverty Action Lab Policy Bulletin: Where Credit Is Due [Internet]. 2015. Available from:
  11. Kenya National Bureau of Statistics. Economic Survey [Internet]. [cited 2016 Dec 13]. Available from:
  12. Ranchod R. The Impact of Samasource - BORGEN [Internet]. 2015 [cited 2016 Dec 2]. Available from:
  13. Gino F, Staats BR. Samasource: Give Work, Not Aid. In: Harvard Business School NOM Unit Case 912-011. Harvard Business School Publishing; 2011. p. 23.
  14. Samasource | High Quality Output | Walmart - Case Study [Internet]. [cited 2016 Dec 2]. Available from:
  15. Dolan KA. Samasource Taps Silicon Valley To Create Jobs For Poor People [Internet]. 2011 [cited 2016 Dec 2]. Available from:
  16. Samasource | High Quality Output | Getty Images - Case Study [Internet]. [cited 2016 Dec 2]. Available from:
  17. Samasource | High Quality Output | 360 Incentives - Case Study [Internet]. [cited 2016 Dec 2]. Available from:
  18. Konnikova M. Bringing the Rural Poor Into the Digital Economy [Internet]. 2015 [cited 2016 Dec 2]. Available from:
  19. Jiang L, Wagner C, Nardi B. Not Just in it for the Money: A Qualitative Investigation of Workers’ Perceived Benefits of Micro-task Crowdsourcing. In: 2015 48th Hawaii International Conference on System Sciences [Internet]. IEEE; 2015. p. 773–82. Available from:
  20. Upadhya C, Vasavi AR. Indo-Dutch Programme for Alternatives in Development: Work, Culture, and Sociality in the Indian IT Industry: A Sociological Study [Internet]. 2006. Available from:
  21. Chinen M, Hoop T de, Balarin M, Alcázar L. Vocational and Business Training to Increase Women’s Participation in Higher Skilled, Higher Valued Occupations in Low- and Middle-Income Countries - A Systematic Review. 2016;
  22. Katz E. Programs promoting young women’s employment: what works. Adolesc Girls Initiat An Alliance [Internet]. 2008; Available from:
  23. Blattman C, Ralston L. Generating Employment in Poor and Fragile States: Evidence from Labor Market and Entrepreneurship Programs. SSRN Electron J. 2015;
  24. McKenzie D, Woodruff C. What Are We Learning from Business Training and Entrepreneurship Evaluations around the Developing World? World Bank Res Obs. 2014 Feb;29(1):48–82.
  25. Tripney J, Hombrados JG, Newman M, Hovish K, Brown C, Steinka-Fry K, et al. Post-Basic Technical and Vocational Education and Training (TVET) Interventions to Improve Employability and Employment of TVET Graduates in Low-and Middle-Income Countries: A Systematic Review. Campbell Syst Rev. 2013;9(9).
  26. Attanasio O, Kugler A, Meghir C. Subsidizing Vocational Training for Disadvantaged Youth in Colombia: Evidence from a Randomized Trial. Am Econ J Appl Econ. 2011 Jul;3(3):188–220.
  27. Card D, Ibarrarán P, Regalia F, Rosas-Shady D, Soares Y. The Labor Market Impacts of Youth Training in the Dominican Republic. J Labor Econ. 2011 Apr;29(2):267–300.
  28. Adoho F, Chakravarty S, Korkoyah DT, Lundberg M, Tasneem A. The Impact of an Adolescent Girls Employment Program: The EPAG Project in Liberia. The World Bank; 2014 Apr. (Policy Research Working Papers).
  29. Cho Y, Kalomba D, Mobarak AM, Orozco V. Gender Differences in the Effects of Vocational Training Constraints on Women and Drop-Out Behavior. Washington DC; 2013. (Policy Research Working Paper). Report No.: WPS 6545.
  30. World Bank Country and Lending Groups – World Bank Data Help Desk [Internet]. [cited 2016 Nov 21]. Available from:
  31. Kenya vs. Liberia - Country Comparison [Internet]. [cited 2016 Dec 2]. Available from:
  32. Kenya vs. Dominican Republic - Country Comparison [Internet]. [cited 2016 Dec 2]. Available from:
  33. Kenya vs. Malawi - Country Comparison [Internet]. [cited 2016 Dec 2]. Available from:
  34. Crepon B, Duflo E, Gurgand M, Rathelot R, Zamora P. Do Labor Market Policies have Displacement Effects? Evidence from a Clustered Randomized Experiment. Q J Econ. 2013 May;128(2):531–80.
  35. Fair Wage Guide - Good World Solutions [Internet]. [cited 2016 Dec 2]. Available from:
  36. Samasource | Data Project Management | Data Solution for Businesses [Internet]. [cited 2016 Feb 12]. Available from:
  37. Deming WE. Out of the Crisis. Bloomsbury Business Library - Management Library. 1986. 80 p.
  38. Q3 2016 Impact Scorecard [Internet]. [cited 2016 Dec 2]. Available from:
  39. Samaschool | Kenya [Internet]. [cited 2016 Dec 2]. Available from:
  40. Huff C, Tingley D. “Who are these people?” Evaluating the demographic characteristics and political preferences of MTurk survey respondents. Res Polit. 2015;2(3).
  41. Ross J, Irani L, Silberman M, Zaldivar A, Tomlinson B. Who are the Crowdworkers?: Shifting Demographics in Amazon Mechanical Turk. altCHI Sess CHI 2010 Ext Abstr Hum factors Comput Syst [Internet]. 2010; Available from:
  42. Kuek SC, Paradi-Guiford C, Fayomi T, Imaizumi S, Ipeirotis P, Pina P, et al. The Global Opportunity in Online Outsourcing [Internet]. 2015. Available from: