Tech leaders trust us, not simply because we have been in the business of surveying developers and researching trends for more than 15 years now, but because we have been doing it based on a rock-solid methodology.
Let us walk you through our methodology: how we design our research and surveys, and how we collect, cleanse and analyse the data.
Ask the right questions
First things first: we focus on asking the right questions. At SlashData we may have data in our DNA, but we know that plunging head-first into data is not where your quest for answers should begin. Our first step is to support you in your thought process and ensure that you are asking the right questions for your business problem at hand. Knowing exactly which questions you need answered will help us specify not only what data you need, but also which developer/creator profiles we should target, and how big your sample should be. We then translate your business questions into research questions, and your quest for answers into a research plan - as short- or long-term as you need it to be.
Mind the Source
We run two surveys per year, with ±20,000 developers responding to each one. Respondents come from 165+ countries and from different communities and backgrounds, with more joining each survey. Yet we don’t repeatedly survey the same developers, and we don’t rely on any single panel or channel. We reach out to the global developer population afresh each time. In this way we ensure that we accurately represent the mosaic of different communities, backgrounds, goals and projects out there. To achieve that, we use more than 80 different channels and offer a diverse set of incentives - donations, tooling, training - designed to minimise selection bias. We do want to hear from our own developer community each time, as it helps us track the evolution of developer behaviour and explore transitions between platforms or languages, changes in motivations, career paths and more. That said, only up to 10% of survey responses come from our community, and that’s on purpose: we don’t want to continuously tap into the experiences of the same pool of people, but to reach out to the many different developer communities out there. Our ultimate goal is a sample that is not just big enough to dig into, but also high-quality and representative.
No stone unturned
Our research covers multiple development areas: web, desktop, cloud, mobile, industrial IoT, AR/VR, machine learning and data science, games, consumer electronics, embedded development and apps/extensions for third-party ecosystems. The majority of developers are active in more than one of these areas simultaneously, and the technology choices they make in one area affect the choices they make in another. That’s why we ask developers about their experiences across all the types of projects they are involved in - to get a 360-degree understanding of tech adoption patterns and be able to accurately predict future trends.
Cleansing the data of fraudulent responses
After collecting the data, we clean it up - for integrity. Cleansing is a thorough, sophisticated process that goes all the way down to the question level, removing bots and respondents who randomly click through the survey to get to the prizes. At SlashData we have developed ML algorithms that identify fraudulent responses and cleanse them out. Based on the metadata that our bespoke survey-taking platform tracks, we are able to outsmart the not-so-honest respondents and call them out.
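To give a flavour of what question-level cleansing looks like, here is a toy heuristic in Python. This is purely illustrative - SlashData's actual ML-based cleansing is far more sophisticated, and the thresholds, field names and rules below are invented for the example:

```python
# Illustrative sketch only: two classic fraud signals that survey
# metadata can expose. The field names and thresholds are invented;
# this is not SlashData's actual cleansing algorithm.

def flag_suspicious(response, median_seconds):
    """Return a list of reasons a response looks fraudulent."""
    reasons = []
    # "Speeders": finished in under a third of the median completion time,
    # a common sign of random clicking to reach the incentive.
    if response["seconds_taken"] < median_seconds / 3:
        reasons.append("speeder")
    # "Straight-liners": the same answer selected for every question.
    answers = response["answers"]
    if len(answers) > 5 and len(set(answers)) == 1:
        reasons.append("straight-liner")
    return reasons

responses = [
    {"id": 1, "seconds_taken": 40, "answers": [3, 3, 3, 3, 3, 3]},
    {"id": 2, "seconds_taken": 600, "answers": [1, 4, 2, 5, 3, 2]},
]
flags = {r["id"]: flag_suspicious(r, median_seconds=540) for r in responses}
# Response 1 trips both rules; response 2 trips neither.
```

Real cleansing pipelines combine many such signals - timing per question, answer entropy, device metadata - and feed them into models rather than fixed thresholds.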
Weighting the data
Once we have ensured our data is clean, we weight it to minimise sample bias.
What does that mean? Through rigorous statistical tests and methods, we identify and minimise bias caused by imbalances in our sample.
Say there are too many hobbyists in our sample - more than the range in which we estimate the real percentage to lie. We then ‘weight down’ hobbyist responses, to stop them from influencing our results out of proportion.
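The idea above can be sketched as simple post-stratification weighting. The numbers here are invented for illustration: if hobbyists make up 40% of the sample but we estimate they are 25% of the real population, each hobbyist response is weighted down accordingly, and professionals are weighted up:

```python
# Minimal post-stratification weighting sketch with invented shares.
# Each group's weight = estimated population share / observed sample share.

sample_share = {"hobbyist": 0.40, "professional": 0.60}
population_share = {"hobbyist": 0.25, "professional": 0.75}

weights = {
    group: population_share[group] / sample_share[group]
    for group in sample_share
}
# Hobbyist responses count 0.625x, professional responses 1.25x,
# so the weighted sample matches the estimated population mix.
```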
Don’t get wrapped up in the “margin of error”
We get the ‘margin of error’ question a lot, but it should not be the main concern. In fact, the margin of error can be quite misleading if used as the only metric to assess sample and research quality: quoted in isolation, it may misleadingly inflate confidence in a sample. That is why at SlashData we focus our efforts on obtaining a sample that is as big and as random as possible - the two elements that actually lead to a reliable estimate of the margin of error. In other words, it’s not enough to only quote a margin of error.
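To see why, consider the textbook margin of error for a proportion. The formula below assumes a simple random sample - which is exactly the catch: a biased, non-random sample can still produce a deceptively small number. The figures here are illustrative, not a quoted SlashData statistic:

```python
# Textbook 95% margin of error for an observed proportion p with
# sample size n. Valid only for a simple random sample - a biased
# sample can yield a small value while still being unrepresentative.
import math

def margin_of_error(p, n, z=1.96):
    """z * sqrt(p * (1 - p) / n), the classic formula for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# With p = 0.5 and n = 20,000, this comes out to roughly +/-0.7
# percentage points - but only if the sample is truly random.
moe = margin_of_error(0.5, 20_000)
```

The small number the formula produces is meaningless if the sample itself is skewed - which is why sample size and randomness, not the quoted figure, are what matter.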
Data you can trust
From collection to analysis to presentation, we ensure that our data and insights faithfully represent the software development ecosystem. This way, you can draw conclusions and decide your developer-facing strategy. With confidence.