About Recurve

Transform your campaign performance with AI

Fundraising is changing—and we believe it’s time campaign modelling changed with it. What was once considered cutting-edge—machine learning—has now become a vital part of how fundraisers can improve their campaign performance.

We’re excited to share this new chapter with you – bringing together the best of human insight and machine intelligence to help fundraisers achieve even greater things.

Born from years of analytical expertise and months of careful development, Recurve AI takes the power of machine learning and makes it practical, affordable, and refreshingly easy to implement. No more one-off bespoke models that are difficult to refresh and costly to rebuild—instead, you get access to a flexible suite of models that continuously improve with each campaign, delivering smarter selections and stronger results.

We know fundraisers value trusted expertise alongside powerful tools, so with Recurve AI you’ll always have knowledgeable analysts by your side to support you – unlike some platforms, you won’t be left alone to figure it out!

Benefits

We use multiple algorithms that offer you stronger performance than a typical manual selection process or a tool using a single algorithm

No more waiting months for models to be built from scratch. Recurve is a ‘ready-to-go’ model that can deliver high-impact results in just 2–3 weeks, without the need for IT integration

Recurve AI brings advanced propensity modelling within reach for mid-size and growing charities, removing the high costs and complexity of traditional bespoke solutions while freeing up your analysis team’s time

We combine machine learning with human insight. You’ll always have access to a data expert who understands both statistics and fundraising—not just algorithms

Recurve models evolve. Recurve AI continuously adapts to changing donor behaviour, ensuring your campaigns remain effective over time—without the need for constant rebriefs

Recurve AI isn’t about data for data’s sake. Every model is built with clear fundraising goals in mind—whether you’re driving conversions, preventing churn, or increasing lifetime value


Workflow

SIGNAL

Frequently asked questions

Propensity predictions measure the likelihood of an individual performing a specific action, such as making a purchase, repeating a purchase, or churning. By identifying individuals with high propensities for desired actions, charities can effectively target and engage the right supporters at the optimal time, thereby driving revenue growth and retention. Achieving accurate propensity predictions requires a robust propensity model.

Random decision forests are ensemble learning methods composed of multiple individual decision trees. A decision tree is a classifier algorithm that operates like a flowchart, mapping a series of decisions to reach a specific outcome classification.

Instead of relying on a single tree, random decision forests aggregate the results from dozens or even hundreds of trees to significantly enhance prediction accuracy. This ensemble approach gives rise to the “forest” concept. The algorithm analyses the collective structure of these trees, examining their various splits and branches. It identifies and leverages the most informative paths within this forest to move from raw input data towards a desired outcome classification – essentially a “yes” or “no” determination regarding the outcome. For example, classifying someone as a “good potential supporter.”
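
As a rough illustration of the technique described above (not Recurve’s own implementation, which isn’t shown here), a random decision forest can be trained in a few lines with scikit-learn on synthetic data:

```python
# Illustrative sketch only: a forest of decision trees trained on synthetic
# "supporter" data. Recurve's actual models and data are not represented here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic dataset: 1,000 rows, 10 attributes, binary outcome (0/1)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 200 individual decision trees, each fitted to a random bootstrap sample;
# the forest's classification is the majority vote across all the trees
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

predictions = forest.predict(X_test)   # the "yes"/"no" determinations (1/0)
print(forest.score(X_test, y_test))    # accuracy of the ensemble on held-out data
```

The aggregation is what does the heavy lifting: any single tree overfits its own quirks, but the majority vote across hundreds of trees smooths those quirks out.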

While the final classification might appear binary (“yes” or “no”), real-world prediction deals with probabilities. The likelihood of someone performing an action is rarely a definitive 100% or 0%. To capture this nuance, the prediction process employs propensity scores. A propensity score quantifies the probability that an individual will perform the target action. For instance, a higher score indicates a greater likelihood of being a good potential supporter. Every individual within the target group receives a score, and those scoring in the top percentiles are typically recommended as the best candidates for achieving the desired outcome.
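
The scoring-and-ranking step can be sketched in plain Python. The supporters and scores below are made-up examples, not real data:

```python
# Illustrative sketch: rank supporters by propensity score and keep the
# top 20% as the recommended selection. All scores here are invented.
supporters = {
    "A": 0.91, "B": 0.15, "C": 0.64, "D": 0.88, "E": 0.32,
    "F": 0.77, "G": 0.05, "H": 0.59, "I": 0.42, "J": 0.96,
}

# Sort supporter IDs from highest to lowest propensity score
ranked = sorted(supporters, key=supporters.get, reverse=True)

# Keep only the top 20% (the top two of ten supporters in this toy example)
top_20_percent = ranked[: max(1, len(ranked) // 5)]

print(top_20_percent)  # the best candidates for the desired outcome
```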

Because random decision forests are built upon multiple decision trees, it’s possible to evaluate the importance of different attributes at each node split within the trees. These ‘feature importances’ reveal which factors contribute most significantly to the likelihood of success or failure for the predicted outcome, providing valuable insights into the drivers of behaviour.
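
As an illustrative sketch (scikit-learn, synthetic data, and hypothetical attribute names – not Recurve’s actual variables), feature importances can be read directly from a trained forest:

```python
# Sketch: extract and rank feature importances from a trained random forest.
# The attribute names are hypothetical stand-ins for supporter attributes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=1)
names = ["recency", "frequency", "value", "tenure", "channel"]  # hypothetical

forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Importances are normalised to sum to 1, so each value is that attribute's
# share of the predictive signal across all the trees' node splits
for name, importance in sorted(zip(names, forest.feature_importances_),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```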

Handling Missing Data: Random decision forests are notably robust when dealing with missing data, especially in contrast to methods like logistic regression or neural networks, which often require complete data inputs. Within a single decision tree, if a value is missing for a specific attribute, the algorithm can explore alternative decision paths. When scaled across an entire random decision forest, this inherent flexibility allows the model to maintain high prediction accuracy even in the presence of incomplete datasets.
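
A brief sketch of this in practice. Note that not every library handles missing values inside the trees themselves: scikit-learn’s random forest has traditionally required complete inputs, so the example below adds a simple imputation step – one common workaround – rather than demonstrating native alternative split paths:

```python
# Sketch: training and scoring on data with ~10% missing values.
# Here missing entries are filled with the column median before the forest
# sees them; some other tree implementations route around NaNs natively.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=400, n_features=6, random_state=2)
rng = np.random.default_rng(2)
X[rng.random(X.shape) < 0.1] = np.nan   # knock out roughly 10% of the values

model = make_pipeline(SimpleImputer(strategy="median"),
                      RandomForestClassifier(n_estimators=100, random_state=2))
model.fit(X, y)
print(model.score(X, y))                # still trains and scores despite the gaps
```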

Identifying Collinearity and Information Gain: Random decision forests are effective at navigating the challenges posed by collinearity, where attributes are highly correlated. Many attributes in supporter data are linearly related but may not be the primary drivers of a specific behaviour. The random decision forest algorithm focuses on finding areas of greatest information gain. This means it can recognize interdependencies between attributes but will prioritize the feature that provides the most leverage over the prediction. This capability is particularly advantageous when working with large, complex supporter datasets, as human behaviour data is often noisy and doesn’t always conform to standard statistical distributions or assumptions.
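
‘Information gain’ has a precise meaning: the reduction in entropy (uncertainty) that a candidate split achieves. A toy calculation, with illustrative labels only:

```python
# Toy illustration of information gain: how much a split reduces entropy.
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    probs = [labels.count(c) / total for c in set(labels)]
    return -sum(p * log2(p) for p in probs)

def information_gain(parent, left, right):
    """Entropy reduction from splitting `parent` into `left` and `right`."""
    weight_l = len(left) / len(parent)
    weight_r = len(right) / len(parent)
    return entropy(parent) - (weight_l * entropy(left) + weight_r * entropy(right))

# A split that perfectly separates donors (1) from non-donors (0)
parent = [1, 1, 1, 0, 0, 0]
gain = information_gain(parent, [1, 1, 1], [0, 0, 0])
print(gain)  # 1.0 bit: this split removes all of the uncertainty
```

When two attributes are highly correlated, they offer similar information gain at a given node; the algorithm simply splits on whichever offers more, which is why collinearity tends not to derail it.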

Adroit can employ a more sophisticated prediction approach that classifies outcomes using a range of predictive algorithms. This includes individual decision trees, random decision forests, logistic regressions, and neural networks.

At the time of prediction, the system dynamically evaluates and selects the algorithm that performs best for the specific task at hand, e.g. cash-to-RG conversion. In our experience this tends to be the random decision forest algorithm, which is why we use it as our ‘default’ approach.
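
The ‘evaluate and select the best algorithm’ step can be sketched as a cross-validated bake-off. The candidate set and data below are illustrative assumptions, not the production system:

```python
# Sketch: cross-validate several candidate classifiers on the same task and
# keep whichever scores best. Data and candidates are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=3)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=3),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=3),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Mean accuracy across 5 cross-validation folds for each candidate
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```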

Our preferred approach is to score your supporters externally to your CRM platform. We extract your dataset, run the scoring scripts, score each supporter and then write back the results as a virtual variable.

Embedding such a process within your CRM platform is beneficial when the desired action occurs frequently and supporter behaviours change often, which is why this approach can work very well within the retail sector. In the charity sector, however, the actions you request of supporters, for instance donating a cash gift, are far less frequent, so daily refreshes of the model scoring are simply overkill.

And while our process is highly automated and streamlined, having access to the data means we’re better placed to identify any issues and tweak the algorithms accordingly, keeping the models working correctly.

Absolutely not.

There’s an important distinction between the scoring process and the campaign selection process. The scoring simply provides you with enhanced information about supporters’ likelihood to act, which is typically more accurate than manually selecting supporters.

Armed with the information you can still choose to focus your selections upon campaign efficiency (which may return a higher ROI but, in some cases, a lower Gross Income), or focus upon Gross Income (which may generate a lower ROI). And you can also supplement the scoring approach with additional segments that you wish to test within a campaign.



The model will score each supporter hierarchically, but you’ll need to decide the maximum and minimum scores you wish to include in the campaign. For instance, you may wish to exclude the bottom 20% of supporters, or to select only the top 20%!

Each of these scenarios will create a different set of expected campaign costs and income.
To help with this process and ensure you select the optimum volume that matches your objectives, we offer a campaign calculator. Enter the cost per communication and it quantifies the gross income and net income expected for each percentage of supporters included.
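
The calculator’s logic can be sketched like this. All figures below (gift value, response rates, costs) are illustrative examples only, not benchmarks from real campaigns:

```python
# Sketch of the campaign-calculator idea: for each cut-off percentage of
# supporters (ranked by propensity score), estimate gross and net income.
# Response rates fall by decile, since lower-scored supporters respond less.

def campaign_outcomes(n_supporters, avg_gift, response_by_decile,
                      cost_per_comm, cutoff_pct):
    """Expected (gross, net) income when contacting the top `cutoff_pct`%."""
    decile_size = n_supporters // 10
    gross, mailed = 0.0, 0
    for decile in range(cutoff_pct // 10):
        gross += decile_size * response_by_decile[decile] * avg_gift
        mailed += decile_size
    net = gross - mailed * cost_per_comm
    return round(gross, 2), round(net, 2)

# Illustrative inputs: 1,000 supporters; response tails off down the deciles
rates = [0.12, 0.10, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02, 0.015, 0.01]
for pct in (20, 50, 100):
    gross, net = campaign_outcomes(1000, 25.0, rates, 0.85, pct)
    print(f"top {pct}%: gross £{gross}, net £{net}")
```

With these made-up numbers, contacting the top 50% yields a higher net income than mailing everyone: exactly the efficiency-versus-gross-income trade-off described above.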

In our experience, if propensity models underperform it’s very often because the implementation is not aligned with the model and the data.

The two most common issues we experience are:
1. Initial testing of the model that isn’t robust and statistically significant, or that fails to take into account relevant campaign factors, such as channel and proposition.
2. A lack of target data. For example, if you have never asked one-off donors to become regular givers, there isn’t a lot of target behaviour to work from! In this case you can build a model, but it’ll be based more on proxies and lookalikes (i.e. similar demographics). In cases like this the proxy model can be the first stage to generate more data, which can then be modelled more accurately later on.

Our new approach works on the principle of offering a range of the most commonly required use cases – this helps keep the costs significantly lower.
We are of course able to build a bespoke model for you; however, the cost of such a model may need to be revised as bespoke models require a significant amount of upfront time and investment.

Not at all.
The term ‘bespoke model’ refers to a specification which is specific to a particular desired action and hence may be of limited use to other charities. For example, if a client were to request a ‘reactivation from lapsed cash supporters to RG by email’ model, then this would be a bespoke model as it would have limited appeal to other charities.

However, a model such as a single cash gift appeal has been engineered to cater for multiple charities. The scoring algorithm, though, is unique to each charity: the variables identified as discriminating, and the ‘weighting’ of those variables, are unique to each charity and each usage.

Think of a model refresh as giving your existing predictive model a little tune-up!

A refresh adjusts the model to take account of changes and shifts in supporter behaviour – someone who once was perhaps lapsing may have given again, or shifted their last gift value. When that happens, your original model might not be quite as accurate as it used to be.
Refreshing it means we bring in the latest data and essentially retrain the model and make sure the updated model is performing well. Doing this regularly – maybe every quarter, whenever things feel different or simply prior to a campaign being launched – keeps your model sharp and helps ensure you’re making selections based on the best possible information.

It’s worth noting that a refresh is very much a refresh of the original data sources. A refresh would not include the opportunity to add in significantly different data sources, such as a different audience. It can be a bit of a grey area, but we’re happy to look at it on a case-by-case basis.

You’re right – the number of refreshes included within a bundle differs for different types of model. We’ve based the number of refreshes on the typical expected frequency of that activity – for instance, you’re likely to run a greater number of cash appeals each year than a legacy enquiry campaign or a campaign to convert cash supporters to a regular gift.

The numbers we’ve quoted are a guide and so your bundle price can be adjusted to reflect if you run more cash appeals, but not less.
If the volume of refreshes is too high, then we’re happy for you to use them up within a 24-month window from the model being created, rather than 12 months.

During the briefing stage we’ve allocated a good amount of time to discuss your objectives and to work through any questions you may have, either technical or otherwise.

We also provide a ‘Support for implementation & testing’ module that we strongly recommend if you’re relatively new to incorporating models within your selection process. If you require a greater level of support than the allocated time provides for, we’ll make you aware that we may have to charge additional fees based on the additional time.

Our ambition is to bring the cost of modelling down so that more charities can benefit from it, and we can only do this by capping the amount of support we can offer before charging for extra time.



Some models, such as Legacy and Major Donor, very often benefit from including non-transactional data such as the age of the supporter and/or a measure of their wealth. You may have collected this information through surveys or via third-party sources, such as Experian’s Mosaic geodemographic segmentation, CACI’s Acorn segmentation or others.

If you don’t have access to this type of information we can discuss a range of affordable options that have the potential to enhance your model.


Recurve AI Form