
What are you doing to replace behavior targeting on Facebook?

You were paying for it the whole time; you just didn't know it. Oracle was taking a hefty percentage of the CPM. It's just like having crappy creative: pay now or pay later. The results were always pretty meh anyway.

We did extensive A/B testing back when they were available. We have predictive audiences for both sales and service built from your database (CRM, DMS). Lookalike audiences based on these outperformed Oracle 2:1 or better on every metric. This is how our agency and dealer partners increase reach. We feel this approach works best: we do the nerd stuff, they do the creative stuff.

There really isn't an easy way for a dealer or agency to do this in house, either. The barrier to entry for machine learning is really high. You need to know what you are doing and have enough data that an out-of-bag testing procedure can be conducted.

With an approved integration, the sync process takes minutes versus up to 72 hours. The integration is not easy either; Facebook changes the rules every month. I am pretty sure we are one of the only ones in all of auto that have it. This means the audiences update every day. We do this to ensure each of our 11 audiences is as relevant as possible. So if a customer buys, we remove them.

We also use a data hybridization process to combine the PII from all past transactions with the CRM data. This ensures the highest possible match rates. By the way, you have to prepare all of the data to Facebook's standard. That requires code or ninja-level Excel game... we code. You also need a few million sales and service ROs to train the models.
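
To give a sense of what "prepare all of the data to their standard" means: Facebook's custom audience match spec wants identifiers normalized and then SHA-256 hashed before upload. A minimal sketch (the sample record is fake):

```python
import hashlib
import re

def normalize_email(email: str) -> str:
    # Facebook's match spec: trim whitespace and lowercase before hashing.
    return email.strip().lower()

def normalize_phone(phone: str) -> str:
    # Digits only, including country code; no symbols or spaces.
    return re.sub(r"\D", "", phone)

def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Fake record for illustration only.
record = {"email": " Jane.Doe@Example.com ", "phone": "+1 (555) 010-2020"}
upload_row = {
    "EMAIL": sha256_hex(normalize_email(record["email"])),
    "PHONE": sha256_hex(normalize_phone(record["phone"])),
}
print(upload_row)
```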

You can't just import purchased lists. Today you have to declare at the record level whether the data is first party or not. We started work on this in October of last year, so we've seen some stuff. Anyone who has ever been on one of our demos will tell you we eat, breathe, and sleep audiences, and we want you to use them everywhere, not just FB.

End of blatant self promotion :banana::banana2:


Ah, thanks for all that info! So I understand we can't just buy the data and put it on FB ourselves because FB only allows you to upload first-party data now. However, I see this option to share an audience with another account:

https://business.facebook.com/business/help/1676733802590358

I was thinking maybe we could just pay some data broker to share a few audiences with us. Is that what you guys do? Just share the audience? I would prefer to continue running the ads myself.

I took a course on machine learning. I'm curious how you're using it. To build audiences?
 

We do not share the audience; that's generally against best practice, as it really limits their usefulness. Instead we connect to your actual FB ad account. This way you can create lookalikes off of any of them. With shared audiences, you can't sell them (this violates Facebook's rules). You then build the creative, set up, and manage your campaigns yourself; you just select the audience you want to use in the drop-down in the ad set.

In terms of our approach to machine learning, we have been at it for about 2-3 years and are on the 3rd iteration of the platform. At the core of our capability is our universal architecture: we figured out how to standardize each of our CRM and DMS integrations. This gives us one massive data set rather than individual models for each integration type. Data preparation and standardization are essential before you can create value.
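
To make "universal architecture" concrete: the idea is that every integration gets a small adapter that maps its export into one canonical record, so the models only ever see one schema. A toy sketch (the vendor names and field names are invented, not our actual schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, Dict

# The canonical record every model trains on, regardless of source system.
@dataclass
class SalesRecord:
    customer_id: str
    lead_create_date: date
    contract_date: date
    vehicle_segment: str

# One small adapter per CRM/DMS export format.
def from_vendor_a(row: dict) -> SalesRecord:
    return SalesRecord(
        customer_id=row["CustID"],
        lead_create_date=date.fromisoformat(row["LeadDate"]),
        contract_date=date.fromisoformat(row["SoldDate"]),
        vehicle_segment=row["Segment"].strip().upper(),
    )

def from_vendor_b(row: dict) -> SalesRecord:
    return SalesRecord(
        customer_id=row["customer_number"],
        lead_create_date=date.fromisoformat(row["created"]),
        contract_date=date.fromisoformat(row["contracted"]),
        vehicle_segment=row["veh_class"].strip().upper(),
    )

ADAPTERS: Dict[str, Callable[[dict], SalesRecord]] = {
    "vendor_a": from_vendor_a,
    "vendor_b": from_vendor_b,
}

# Every downstream step consumes SalesRecord, never raw vendor rows.
record = ADAPTERS["vendor_a"]({"CustID": "42", "LeadDate": "2019-06-01",
                               "SoldDate": "2019-06-20", "Segment": "suv"})
```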

At this point a lot of companies will tell you the rest is "proprietary" or "magic." This is bullshit and it should make your skin crawl! So I will actually tell you how we do it. "Magic" means they either can't explain it, or it sucks and they know it. We are really proud of it :)

We started with a simple premise: about 20% of our data set had 2 or more purchases (millions of records for both sales and ROs). For service, the repeat rate is obviously much higher. We evaluated the initial purchase and CRM data and determined which of the values are predictive. This involves regression analysis and building a random forest model. This step is critical and takes a lot of time.
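
Here's a rough sketch of what that "which values are predictive" step looks like in scikit-learn terms. This is not our actual pipeline; the column names, synthetic data, and hyperparameters are all illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the repeat-buyer table; column names invented.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lead_velocity_days": rng.integers(1, 90, 5000),
    "total_customer_pay": rng.gamma(2.0, 400.0, 5000),
    "vehicle_age_years": rng.integers(0, 12, 5000),
    "days_until_next_purchase": rng.integers(90, 1400, 5000),
})
features = ["lead_velocity_days", "total_customer_pay", "vehicle_age_years"]
X, y = df[features], df["days_until_next_purchase"]

model = RandomForestRegressor(n_estimators=300, n_jobs=-1, random_state=42)
model.fit(X, y)

# Rank candidate inputs by how much the forest actually leans on them.
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:22s} {imp:.3f}")
```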

You go through each input feature, like say "lead velocity" (contract date minus lead create date), and decide whether to include it and how to weight it. It is important to understand it is not about having thousands of these, but rather the right combination of the ones that matter. You can also use simpler approaches that are more flexible. As an example, for service we looked at previous jobs by category, but instead found the total customer-pay amount was more predictive and generally more accurate.
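
For a concrete feel, here's how those two example features could be derived in pandas. The tiny tables and column names are invented; the real ones come from the CRM/DMS sync:

```python
import pandas as pd

sales = pd.DataFrame({
    "customer_id": ["a1", "b2"],
    "lead_create_date": pd.to_datetime(["2019-05-01", "2019-06-10"]),
    "contract_date": pd.to_datetime(["2019-05-19", "2019-07-02"]),
})
ros = pd.DataFrame({
    "customer_id": ["a1", "a1", "b2"],
    "pay_type": ["C", "W", "C"],   # customer pay vs. warranty
    "amount": [312.50, 880.00, 145.00],
})

# "Lead velocity": days from lead creation to signed contract.
sales["lead_velocity_days"] = (
    sales["contract_date"] - sales["lead_create_date"]
).dt.days

# Total customer-pay dollars per customer, instead of per-category job counts.
customer_pay = (
    ros[ros["pay_type"] == "C"]
    .groupby("customer_id")["amount"].sum()
    .rename("total_customer_pay").reset_index()
)
features = sales.merge(customer_pay, on="customer_id", how="left")
print(features)
```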

Now that you feel your input features are good, it is time to test. We chose to isolate the 20% where we already knew the next purchase date. This let us simulate the prediction 100 times per batch and measure the accuracy. We establish a definition of good (a window of success) and check the accuracy of each simulation. You tune the input features each run and see whether you improved.
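
The "window of success" scoring itself is simple to express. An illustrative version (the 30-day default is made up; as noted below, the real window differs for sales and service):

```python
import numpy as np

def window_accuracy(predicted_days, actual_days, window=30):
    """Share of predictions landing within +/- `window` days of the truth."""
    predicted = np.asarray(predicted_days)
    actual = np.asarray(actual_days)
    return float(np.mean(np.abs(predicted - actual) <= window))

# One simulated batch: 2 of 3 predictions land inside the window.
print(window_accuracy([120, 300, 45], [100, 290, 200]))  # 0.666...
```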

You have to account for overfitting. Overfitting is a bias toward the data you used to train your models. This part is key. We randomly remove 30% of the historical data prior to training the model. Because this data is untouched, it creates a second test to validate the results. We then test the accuracy on this set and compare the results. This is called an out-of-bag testing procedure.
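
In scikit-learn terms, the hold-out-30%-then-compare step looks roughly like this (synthetic data standing in for our feature matrix):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the historical feature matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 4))
y = X @ np.array([30.0, -10.0, 5.0, 0.0]) + rng.normal(scale=15, size=10_000)

# Set aside 30% of history that the model never sees during training.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.30, random_state=42)

model = RandomForestRegressor(n_estimators=300, oob_score=True,
                              n_jobs=-1, random_state=42)
model.fit(X_train, y_train)

# A big gap between the train score and the pure holdout score is the
# overfitting signal; the forest's own out-of-bag score comes for free.
print("train R^2:  ", model.score(X_train, y_train))
print("OOB R^2:    ", model.oob_score_)
print("holdout R^2:", model.score(X_holdout, y_holdout))
```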

We did this hundreds of times, simulating 100 runs per batch, and test/tune/adjust until we felt good. We predict an actual date each customer is going to buy, but we establish a window around that date that we feel is adequate for them to be marketing eligible (different for sales and service). Then we established a long-term training data set based on actual sales outcomes from predictions we have actually made. This allows the models to improve automatically over time.
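
The marketing-eligibility window itself is just date math around the predicted buy date. The spans here are illustrative only; the real ones differ between sales and service:

```python
from datetime import date, timedelta

def marketing_window(predicted_buy_date, days_before=45, days_after=30):
    # Spans are illustrative; the real window differs for sales vs. service.
    return (predicted_buy_date - timedelta(days=days_before),
            predicted_buy_date + timedelta(days=days_after))

start, end = marketing_window(date(2019, 9, 15))
is_eligible = start <= date.today() <= end  # in-window => marketing eligible
```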

That's pretty much the process. We did everything in Python and R and custom built all of the code and processes behind it. Driven Data was bootstrapped and started as a reporting platform, so we already had a ton of accurate data. We knew this functionality was essential to our long-term growth, so we dove in.
 