Our methodology

Top 100 global tech companies trust our data and survey methodology

Every year we survey more than 40,000 developers in over 150 countries working in web, desktop, cloud, mobile, IoT, games, AR/VR, machine learning and data science.

Data and survey methodology

Our clients rely on our research to guide them in developing business strategies

Our methodology allows us to understand and profile developers by motivation, geography, age, pro vs hobbyist, area of interest, company size, dev team size, language and more. We do this consistently across historical data and without compromising on sample size. Our methodology is founded on 9 essential and non-negotiable qualities that guarantee the integrity of our research.

Magnitude

SlashData research data and insights come from our flagship Developer Economics online developer surveys. Twice a year we reach out to mobile, IoT, cloud and desktop developers in more than 150 countries, and as a result we hear the opinions and investment decisions of more than 40,000 developers annually. We take pride in running the largest and most global independent developer survey.

Impartiality

Developer Economics is an independent survey: no vendor, community or other partner owns our surveys or data. Our survey respondents do not come from any single developer community - vendor-owned or otherwise. Instead, they come from more than 60 different outreach channels, so our results are not biased towards the mindset of any single community. In fact, we go to great lengths to ensure, to the degree possible, that all different developer mindsets and geographies are adequately represented in our samples.

Inclusivity

For each survey wave we get a representative sample of 20,000+ developers. We translate our surveys into 8+ languages, including French, Spanish, Portuguese, Russian, Korean, Chinese (Simplified), Vietnamese and Turkish. In this way we ensure that we also reach non-English-speaking communities around the world - in other words, that our results are not biased in favour of English speakers. We also work with 60+ regional and media partners to promote our surveys. These range from student communities to professional developer forums, and from small local communities (such as Meetups in Kenya) to large vendor communities (such as Amazon, Microsoft and Intel) - and we add new ones with every survey. This way we get to speak to a broadly diverse set of developer profiles all around the world.

Consistency

We use panel responses to track developer evolution. Our developer community counts more than 22,000 members from 110+ countries, with new developers joining with every survey we run. Approximately 10% of each sample comes from returning panel members. Our returning panelists give us a way to track the evolution of developer behaviour, such as transitions from one platform or language to another, and changes in motivations and career paths. We also use panel surveys to conduct qualitative research: where greater clarity is needed, the data from the large-scale surveys may be complemented with smaller panel surveys focused on specific demographics or technologies. These smaller surveys can delve deep into an area with open-ended questions to measure the qualitative aspects of software development.

Substantive

We go to great lengths to reach developers afresh, rather than relying solely on our own panel. Surveying the same pool of people repeatedly is extremely limiting: it biases results towards the beliefs of a fixed set of panelists, and it would cause us to miss important developer behaviours emerging in communities outside our panel. By reaching out to the developer population anew each time and populating our sample with new developers, we make sure we don’t miss new technologies or new “types” of developers as they emerge. We also verify that our results hold no matter what the sample is, so we can be confident that the trends we show are real and not an artefact of following the same people through our surveys. Our research accurately captures trends across the developer population by going to the communities where developers are, rather than asking them to come to our panel, and our weighting methodology ensures that the end results represent the ground truth, not a single community or panel.

Engagement

We incentivise developers to take our surveys with prizes, free content, fun, learning and a chance to have their say. We offer developer-specific prizes as part of a draw. We carefully choose prizes to be of interest only to developers (examples include hardware prototyping boards and IDE licenses) so that we don’t attract non-developers to our surveys. We also release free content that is of interest to developers, such as our State of Developer Nation reports - many respondents tell us that they participate to make these reports possible. We also provide each respondent with customised content in the form of a dashboard where they can see how they compare to other developers. Many developers also tell us they learn a lot through our surveys - and most have a good laugh with the memes we include every few pages. Finally, we also interview selected developers out of those who took part in our surveys and quote them in our reports. A chance to have their say matters to many developers.

Diligence

We cleanse our data and check all responses for integrity, to make sure respondents are not randomly clicking through the survey to get to the prizes. We thoroughly check our raw data set and unceremoniously throw out any suspicious-looking responses, running tests based on - among other things - completion times, duplicate responses and consistency of answers to related questions.
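The kinds of checks described above can be illustrated with a small sketch. This is not SlashData's actual pipeline - the field names, thresholds and rules are hypothetical - but it shows the idea of flagging speeders, duplicate submissions and internally inconsistent answers:

```python
from collections import Counter

# Hypothetical response records: completion time in seconds, a respondent
# fingerprint (e.g. a hash of device/session data), and two related answers
# that should not contradict each other.
responses = [
    {"id": 1, "seconds": 45,  "fingerprint": "a1", "years_pro": 5,  "age": 28},
    {"id": 2, "seconds": 900, "fingerprint": "b2", "years_pro": 8,  "age": 30},
    {"id": 3, "seconds": 700, "fingerprint": "c3", "years_pro": 25, "age": 24},
    {"id": 4, "seconds": 800, "fingerprint": "d4", "years_pro": 3,  "age": 25},
    {"id": 5, "seconds": 850, "fingerprint": "d4", "years_pro": 2,  "age": 26},
]

MIN_SECONDS = 120  # faster than this, the respondent can't have read the questions

def is_suspicious(r, fingerprint_counts):
    if r["seconds"] < MIN_SECONDS:                 # speeding through the survey
        return True
    if fingerprint_counts[r["fingerprint"]] > 1:   # duplicate submission
        return True
    if r["years_pro"] > r["age"] - 15:             # inconsistent related answers
        return True                                # (assumes pro work starts at 15+)
    return False

counts = Counter(r["fingerprint"] for r in responses)
clean = [r for r in responses if not is_suspicious(r, counts)]
print([r["id"] for r in clean])  # → [2]
```

Here respondent 1 is dropped for speeding, 3 for an impossible experience/age combination, and 4 and 5 as duplicates of each other; only respondent 2 survives.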

Confidence

We weight results by region, platform and segment to minimise bias. In any analysis it is important to be aware of unavoidable sampling bias and to mitigate it. To that end we employ rigorous statistical tests and methods to identify and minimise bias caused by regional, platform or developer-segment over- or under-representation in our sample. If, for example, we find that we have too many hobbyists in our sample - as compared to our estimate of the real percentage of hobbyists among developers - we weight hobbyist responses down, so that they don’t affect our results out of proportion.
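The hobbyist example above can be sketched in a few lines. This is a simplified illustration, not SlashData's actual weighting model, and the segment shares are made up: each segment's weight is its estimated population share divided by its observed sample share, and weighted responses are then aggregated with those weights.

```python
# Hypothetical: our sample over-represents hobbyists relative to our
# estimate of the true developer population.
sample_share = {"hobbyist": 0.40, "professional": 0.60}      # observed in sample
population_share = {"hobbyist": 0.25, "professional": 0.75}  # estimated ground truth

# Weight for each segment = estimated population share / observed sample share.
weights = {seg: population_share[seg] / sample_share[seg] for seg in sample_share}
# hobbyist responses are weighted down (0.625), professional ones up (1.25)

# Applying the weights to a toy yes/no question ("do you use tool X?"):
respondents = [
    ("hobbyist", True), ("hobbyist", True), ("professional", False),
    ("professional", True), ("professional", False),
]
total = sum(weights[seg] for seg, _ in respondents)
uses_x = sum(weights[seg] for seg, ans in respondents if ans)
print(round(uses_x / total, 3))  # → 0.5
```

An unweighted tally would put tool X adoption at 3 of 5 respondents (60%); weighting the over-represented hobbyists down brings the estimate to 50%, closer to what the estimated population mix implies.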

Breadth

We believe that software is eating the world, and we are committed to exploring this fascinating phenomenon in all its expressions across several different types of development. We cover multiple development areas: web, desktop, cloud, mobile, IoT, AR/VR, machine learning and data science. Last but not least, we also reach out to developers building messaging bots.

Let's talk about how we can help you understand developers better

contact us