Integration with Salesforce by Custom Code

Jay Mehta
PROserve Essentials Lead

We've reached the end of our series! So far, we've covered building integrations via manual upload, AppExchange packages, Zapier, and Flow with HTTP Callout. But if all other options fail, our last alternative is the most powerful: using custom Apex code.

Some Apex Basics

Apex is Salesforce's built-in development language. It is an object-oriented, Java-like language that is specially architected to operate in Salesforce's multi-tenant environment. Given its attributes, developers with experience in languages like Java will find much that's familiar, enabling them to be productive quickly.

Among other things, Apex lets developers query and act on records on the Salesforce platform, integrate with other systems through callouts, and handle errors that occur. Basically, if you can write a requirement for it, you can probably build a technical solution in Apex to satisfy it.

When should you choose Apex?

Apex allows building logic that can perform callouts far more sophisticated than what is possible with Flow and HTTP Callout. Basically, just like Data Loader is the bigger, stronger, faster version of Data Import Wizard, Apex callouts are the bigger, stronger, faster version of Flow with HTTP Callout. You can even use Apex to build inbound listeners that allow external systems to send data to Salesforce (although you'll need to be careful about authorization and access if you do that), which is not possible using Flow.
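To make that concrete, here's a minimal sketch of an outbound Apex callout. The class name, endpoint, and payload are hypothetical, and a real org would typically point to a Named Credential rather than a hard-coded URL:

```apex
public with sharing class InvoiceSync {
    // Hypothetical endpoint; a real implementation would use a Named Credential
    private static final String ENDPOINT = 'https://api.example.com/invoices';

    // @future(callout=true) runs this asynchronously with callouts enabled
    @future(callout=true)
    public static void sendInvoice(String payload) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(ENDPOINT);
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setBody(payload);

        HttpResponse res = new Http().send(req);
        System.debug('Response status: ' + res.getStatusCode());
    }
}
```

For the inbound direction, the equivalent building block is a class annotated with @RestResource, which exposes a custom REST endpoint that external systems can call (again, with careful attention to authentication and object/field permissions).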

Apex also gives full access to the Salesforce platform's functionality in ways that Flow doesn't. For example, if you want to trigger an action to occur when a user replies to a Chatter thread (for instance, to send a copy of that reply to an external system), there is no way to do that with Flow because it does not have access to monitor the FeedComment object for new records. Apex doesn't have that limit–so there are use cases where it's the only valid tool for the job.
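As a sketch, the Chatter example above could start with a trigger on FeedComment. Because callouts aren't allowed directly in trigger context, the trigger hands the records off to an asynchronous job; the Queueable class named here is hypothetical:

```apex
// Fires when users reply to Chatter threads, which Flow can't monitor
trigger FeedCommentSync on FeedComment (after insert) {
    List<Id> commentIds = new List<Id>(Trigger.newMap.keySet());
    // Callouts can't happen in trigger context, so enqueue async work.
    // FeedCommentCalloutJob is a hypothetical Queueable that performs the callout.
    System.enqueueJob(new FeedCommentCalloutJob(commentIds));
}
```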

For mission-critical integrations (like those with financial systems), Apex also offers more sophisticated error handling, debugging, and logging. While Flow is improving in these areas (with additions like Fault Paths and custom error messages), Apex will almost always offer more flexibility for custom correction and retry logic. For example, if a callout from a Flow fails because the external system is temporarily offline, there's no easy way to re-execute it. But with Apex, a developer could inspect the response and add the request to a retry queue to be automatically re-sent later.
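One way to sketch that retry pattern: check the response status and, on a server-side failure, stash the request in a custom object that a scheduled job can drain and re-send later. Callout_Retry__c and its fields are hypothetical:

```apex
public with sharing class ResilientCallout {
    public static void send(String endpoint, String payload) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint(endpoint);
        req.setMethod('POST');
        req.setBody(payload);

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() >= 500) {
            // External system is down; queue the request for automatic re-send.
            // Callout_Retry__c is a hypothetical custom object that a scheduled
            // Apex job would periodically re-attempt.
            insert new Callout_Retry__c(
                Endpoint__c = endpoint,
                Payload__c  = payload
            );
        }
    }
}
```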

Finally, Apex isn't subject to the same performance limits that tools like Flow are. If you need to process high volumes of data (for example, tens or hundreds of thousands of records at a time), Apex is a much better–and sometimes the only–option.

The Apex Development Process

Apex development is far more structured than many other configuration and development tasks. You can build and edit Flows in Production (which, officially, isn't best practice but is often the path smaller organizations take), but Apex code can only be added or modified in a sandbox environment. That code should be thoroughly tested in sandbox before being deployed to production. Those deployments can be performed using Change Sets, or using more sophisticated deployment tools like Copado or GearSet.

Speaking of testing, Apex requires developers to write test classes that are designed to prove that the code works as intended. These classes must pass before a deployment to production is allowed. Every Salesforce org must reach a code coverage ratio of 75% in order to deploy Apex code from sandbox to production, which means that if you write 100 lines of code, your test classes must execute at least 75 of them to meet the ratio. That's an oversimplification (because code coverage is calculated across the entire Salesforce org, not a single class or trigger), but it's a good rule to follow on all individual code elements to make sure you stay well above the 75% threshold.

At the beginning, writing test classes–which can sometimes be longer than the code you're actually testing, depending on how thorough you are–feels like it simply adds overhead to getting code ready for production. But in the long run, Salesforce believes that adopting a test-driven development strategy both saves time and produces better code, because it makes it easier to determine when changes may introduce unexpected errors in unrelated functionality. For example, if a developer is working on a piece of code that's used by multiple other Apex classes, they may not realize they're making a change that will break one of those external pieces. But a well-written test class could catch that issue before it gets pushed to production.
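For illustration, a test class can be as simple as this sketch; the class under test, CommissionCalculator, and its behavior are hypothetical:

```apex
@isTest
private class CommissionCalculatorTest {
    @isTest
    static void calculatesTenPercentCommission() {
        Decimal result = CommissionCalculator.calculate(1000);
        // If a future change breaks this behavior, the deployment fails here
        System.assertEquals(100, result);
    }
}
```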

While Flow has gained limited test capabilities, this is another area where, if your integration is critical, Apex is superior (even if it takes more effort).

Benefits of Custom Code

The sky's the limit. That is to say, you're not limited by the product features of an AppExchange package, third-party connector, or tool like Zapier. Generally speaking, Apex code offers freedoms that none of the other integration strategies do, because it exposes the full set of Salesforce functionality and allows you to perform callouts to other systems that even Flow and HTTP Callout can't achieve. For example, a popular Electronic Medical Record software–KIPU Health–offers an API that requires a very particular method of authentication on all requests submitted to it. Flow can't do this out of the box, but Apex can, making it the only option for the job. The primary constraints are Salesforce's Apex platform and API limits.

Apex is also more capable of transforming data than Flow is. For example, if you have a comma-separated string of items, there is no built-in way for Flow to split it apart. In Apex, this is child's play (which is actually why Unofficial Salesforce offers a string processor package for Flow that simply packages a set of Apex actions). If that's the only custom Apex you need in a Flow, installing the package may be the best route. But if your needs grow to require more and more packages, building it in code eventually becomes the better option.
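For example, splitting a comma-separated string takes one line of Apex (runnable as anonymous Apex):

```apex
String raw = 'apples,oranges,bananas';
List<String> items = raw.split(','); // items now holds three values
System.debug(items[1]); // oranges
```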

Operating costs of custom code may also be lower than those of other options. Remember in our Zapier examples that every task performed counted towards the billing limits. For large-scale integrations that might use millions of tasks per month, that cost can add up quickly. Custom Apex code may ultimately be significantly less expensive to run on an ongoing basis, assuming you stay within Salesforce's limits.

Drawbacks of Custom Code

The sky is, like, very high up there. That is to say, the level of technical knowledge you need to build a custom integration in code is far higher than the skills necessary to install a package or even to build in a point-and-click tool like Zapier or Flow. The richness of the functionality comes with significant additional complexity, which usually translates into additional cost and implementation time versus other integration methods.

Also, give a thought to the maintenance needs. An integration is just like anything else–it will need to be maintained, updated, and fixed when it inevitably breaks (for instance, if one end of the integration changes its API, or if bug fixes require changes). This requires an ongoing investment in staff or support that can easily offset any operational cost savings.

Speaking of maintenance, Apex code doesn't have the same restrictions that Flow has (and Process Builder had) to prevent developers from accidentally overwriting functional code in production. For example, once a Flow version is activated, it can never be edited again–you're required to create a new Flow or a new version of the existing one. That means that if your changes cause problems, you can always roll back to the prior version. Apex doesn't have those protections–if the test classes pass and you deploy to production, you can overwrite working code with now-broken code. Good DevOps practices (like maintaining a version control system and performing team code reviews) can protect against these risks–but they require more sophistication and resources.

Next Steps

Even though this is the end of our series, it's not the end of the road on integrations. There are also integrations that require combining two or more of the methods we've worked through (for instance, using a data upload that then invokes Flows to automate actions in Salesforce, which then potentially fires requests via Zapier to update another system). There are also more sophisticated middleware systems like Mulesoft that we won't cover in detail.

Plus, don't forget that we've only really been talking about the Salesforce side of the house–your integration may require work with an external system or team of developers to ensure that it can send and receive the data it needs in a format it can accept.

Hopefully something in the series has made you consider a way to connect some of the systems you use on a daily basis and given you an idea of what a path to success might look like. If you have any questions, feel free to reach out! And if you're a PROserve Essentials subscriber, feel free to grab a slot at an upcoming office hours session.