Scuttleblurb on Veeva
My friend Scuttleblurb wrote a very thoughtful piece on Veeva yesterday (link to his website, and Substack). With his permission, I want to highlight three specific bits from his piece.
First, Scuttleblurb explained Veeva’s myriad products through the lens of a typical clinical workflow, which I found really helpful for grasping how Veeva’s products fit together. He mentioned the description might be a bit “tedious,” but frankly, as an analyst who has studied Veeva, I found it more clarifying than monotonous. From his piece:
“The sheer number of Development and Quality apps and the alphabet soup of acronyms assigned to them is enough to confuse even a diligent analyst. Rather than cataloguing each offering in isolation, it’s more instructive to walk through a typical clinical trial workflow and see where they slot in along the way (product names in bold).
Before a trial even starts, the sponsor uses QualityDocs to draft and publish the official playbook that everyone across participating clinics is expected to follow. That playbook combines written procedures and standardized forms covering how clinics should run the study, which systems will house key documents, who must be trained on what, what monitors should verify to ensure protocol compliance, and how serious protocol deviations should be handled.
Once the trial guidelines are established, the sponsor or CRO must clear the study plan and recruitment materials with the Institutional Review Board, finalize site budgets, and ensure that lead investigators and site staff have been trained on the protocol. Each of these steps generates its own thicket of paperwork – delegation logs specifying who’s responsible for what, contracts governing how sites get paid, training certifications, and regulatory packets containing the study protocol, safety information, and the credentials of everyone involved.
Study Startup is essentially a checklist for all of that pre-enrollment groundwork, used to assign tasks to relevant parties and track whether contracts have been signed, staff have been trained, physician credentials are on file, and so on. eTMF (electronic trial master file), meanwhile, serves as the study’s document repository, housing the study plan, signed patient consent forms, training records, safety reporting paperwork, and anything else a sponsor might need to demonstrate a clean paper trail to regulators. In short, Study Startup gets a trial to the starting line while eTMF organizes the receipts that prove everything was done by the book.
With site preparation complete, the trial moves to patient enrollment and randomization. Age, weight, lab results, medication history, blood markers, and any exclusionary criteria are all captured in an EDC (electronic data capture) and reviewed to determine each participant’s eligibility. Those who qualify are then randomly assigned to control or treatment groups by an RTSM (Randomization and Trial Supply Management), which also ensures the right kits reach the right patients and that those kits remain adequately stocked throughout.
The EDC, in addition to storing patient data for purposes of determining trial eligibility, stores just about every vital piece of information about the patient from start to finish. It provides structured forms for staff to enter updated labs and vitals, adverse events, and endpoint measurements. It flags missing values and inconsistencies, and provides an audit trail for who changed what and why.
Each of these activities takes place across a distributed set of clinics. The CTMS (clinical trial management system) is the “command center” that looks across them all so a sponsor can see how enrollment is tracking versus the plan, what major deadlines are at risk, what monitoring teams are scheduled to review at each site, which sites are experiencing high staff turnover or protocol deviations, etc. Think of it like a project management hub purpose-built to oversee clinical trial operations.
Running alongside CTMS, the sponsor also deploys a QMS (Quality Management System) to detect and remediate problems and document proof of resolution. So, let’s say a serious adverse side effect was reported late due to some process failure, like maybe a patient shouldn’t have been eligible for the trial to begin with or a site investigator overlooked early warning signs in lab results. This event would be logged, an investigation opened to determine the root cause, and corrective steps – more training, clearer protocol language – put in place.
At several junctures along the path from site prep to study completion, the clinical trial workflow branches off into still more workflows, each with their own stringent data requirements. For example, the blood sample that a nurse draws from a trial participant is sent for analysis to a centralized lab, which registers those samples into a LIMS (laboratory information management system) that keeps track of who has custody of those samples, who performed what tests, the test results themselves, and any abnormal readings.
The lab’s quality team also relies on QMS and QualityDocs to transform raw instrument readings into a sponsor-ready report, one that documents how samples were tested, validates the results, and records any issues that arose along with how they were resolved. QMS and QualityDocs – as solutions that codify standard operating procedures, monitor deviations, and log remedial measures – also extend naturally into the CDMOs that make the drugs. And just as labs use LIMS to store patient sample test results, manufacturers use it to track batch records and drug testing outcomes.
Or, again, consider the adverse side effect cited earlier, the one caused by a process error. Well, the mishap, beyond being logged in QMS, spawns its own structured workflow. The sponsor’s safety team opens a case record in Safety, which captures in exhaustive detail the patient’s symptoms, when they emerged, the dose they received, follow-up responses from site staff, the assessment of medical reviewers, and any regulatory reports that need to be filed. SafetyDocs stores the supporting documents tied to these cases, while Safety Signal is used to spot patterns across them.
As the drug moves toward filing (and often well before the trial’s conclusion), sponsors turn to Veeva’s Regulatory software. They use Submissions to manage the documents that comprise the NDA package; Publishing to format and organize those documents to the exact technical specifications regulators require; Registrations to keep track of what they are allowed to sell in which markets, and at which doses, etc.; and Labeling to draft the official, regulator-approved text that explains how the product should be used safely.
These regulatory applications address a problem that is somewhat removed from the challenges of running a clinical trial, but they depend on the information gathered along the way. The patient trial data in EDC, the proof of compliance documents stored in eTMF, the cases of adverse side effects logged in Safety all feed into Veeva’s regulatory products. You can even think of the trial workflow itself as a regulated production line that culminates in an agency-ready submission package.”
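To make the walkthrough above concrete, here is a minimal, hypothetical sketch of the pipeline Scuttleblurb describes: each system’s output becomes the next system’s input, and the regulatory submission draws on artifacts from everything upstream. All names and fields are invented for illustration; this is not Veeva’s actual data model or API.

```python
# Hypothetical sketch of the clinical trial workflow described above.
# Each dict stands in for one Veeva system's records; the submission
# package at the end aggregates artifacts from all upstream systems.

def run_trial_workflow() -> dict:
    # QualityDocs: the playbook everyone across participating clinics follows
    playbook = {"system": "QualityDocs",
                "procedures": ["screening SOP", "monitoring SOP"]}

    # Study Startup: the pre-enrollment checklist derived from the playbook
    startup = {"system": "Study Startup",
               "tasks": {"IRB approval": True,
                         "site contracts": True,
                         "staff training": True}}

    # eTMF: the repository proving the groundwork was done by the book
    etmf = {"system": "eTMF",
            "documents": [playbook["system"]] + list(startup["tasks"])}

    # EDC: structured patient data captured from enrollment onward
    edc = {"system": "EDC",
           "patients": [{"id": "P-001", "eligible": True, "labs": [7.1, 6.8]}]}

    # Safety: adverse event cases that reference EDC patient records
    safety = {"system": "Safety",
              "cases": [{"patient_id": "P-001", "event": "late-reported AE"}]}

    # Submissions: the agency-ready package assembled from everything upstream
    return {"system": "Submissions",
            "sources": [etmf["system"], edc["system"], safety["system"]],
            "complete": all(startup["tasks"].values())}
```

The point of the sketch is the shape, not the detail: every stage both consumes the prior stage’s records and emits its own, which is exactly the “regulated production line” framing in the quote.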
After going through Veeva’s products, Scuttleblurb then discussed the key moat that emerges from the entire workflow, to drive the point home (emphasis mine):
“…the main point of walking through that somewhat tedious description of the clinical trial process was to give you a sense of the fractal succession of tasks that cascade from each action, because that’s key to understanding why Veeva’s attempt to unify its development suites makes so much sense. Each step of the workflow begets its own burst of data, documents, and approvals, while also activating the next step, which triggers its own deluge of records and sign-offs, and so on. A serious adverse event logged in Safety, for instance, could be linked to supporting patient data in EDC and activate a process change within QMS.
Clinical systems operate at a different order of complexity than CRM. Where CRM orchestrates engagement workflows and segments clients, clinical systems must store and reconcile vast volumes of data, documents, images, patient events, and regulatory approvals; maintain longitudinal records of clinical measurements and lab values; and preserve regulator-grade audit trails for every change made along the way. Keeping all of these artifacts aligned across a patchwork of point solutions is a formidable challenge.
Veeva Vault provides the shared building blocks – security, reporting, APIs, user management – that make this technically feasible. But it is the chain reaction of dependent, regulatorily mandated steps inherent in clinical trials that provides the business logic for why a sponsor or CRO would choose to standardize on Veeva rather than juggle a patchwork of point solutions.”
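That “chain reaction” of dependent records can be sketched in a few lines of code. The example below is a simplified, hypothetical illustration of the adverse-event example in the quote: a Safety case links back to an EDC patient record, triggers a corrective action in QMS, and every step lands in an append-only audit trail. Class and field names are my own invention, not Veeva’s.

```python
# Hypothetical sketch: cross-linked records plus a regulator-grade audit
# trail, the pattern described in the quoted passage above.
from dataclasses import dataclass, field

@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def log(self, system: str, action: str, who: str) -> None:
        # Append-only: entries are never edited or deleted after the fact
        self.entries.append({"system": system, "action": action, "who": who})

def report_adverse_event(audit: AuditTrail, patient_id: str) -> dict:
    # 1) Safety opens a case that references the patient's EDC record
    case = {"system": "Safety", "patient_id": patient_id, "status": "open"}
    audit.log("Safety", f"case opened for {patient_id}", who="safety team")

    # 2) The process failure behind the event opens a QMS investigation
    capa = {"system": "QMS",
            "root_cause": "eligibility check missed",
            "actions": ["retraining", "clearer protocol language"]}
    audit.log("QMS", "CAPA opened", who="quality team")

    # 3) Corrective steps flow back into QualityDocs as revised procedures
    audit.log("QualityDocs", "SOP revised per CAPA", who="quality team")
    return {"case": case, "capa": capa}
```

Each action begets the next, and each leaves its own record behind, which is why unbundling any one module from the chain is so awkward.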
Of course, no software-related piece these days is complete without an extensive discussion of why AI will or will not annihilate the software company at hand. Scuttleblurb had a thoughtful, extensive discussion on this point. From his piece (all emphasis mine):
“Pretty much any workflow that can be described in words can be trivially instantiated in software without any programming experience…if not today, then soon. Gone are the days when creating a vertical SaaS requires both domain knowledge and coding skills. A small team of trial safety experts could conceivably create a dedicated safety suite in much less than 8 years.
But this underestimates the value of iteration. Even seasoned domain experts can’t anticipate the full range of edge cases that surface over time in a complex workflow. And could those same safety experts describe in granular detail the QMS workflow that Safety feeds into, or the downstream changes in SOPs and training it triggers in QualityDocs? The interconnected nature of R&D workflows means each of Veeva’s apps is strengthened by its integration with adjacent ones on the same platform. To truly rival Veeva’s R&D offering, a competitor would need to not only match any given app feature-for-feature, but also build out the surrounding upstream and downstream modules that pipe into and depend on it, and then stitch them all into a unified system.
Given the regulatory burden, this can’t just be 80% right; it needs to be all the way there. You can get away with building a stripped down financial research terminal, charging $100/month, and peeling away like 20% of users who are overserved by Factset and CapitalIQ. That doesn’t work in systems where being 90% right is functionally equivalent to being wrong.
“The notion that AI is going to annihilate deeply embedded systems of record is kind of a strawman at this point. I’m not sure too many folks really believe this. Instead, the woke thing to argue now is that SoRs are dumb data repositories and most of the future value will be claimed by third party agents driving workflows on top. My intuition runs in the opposite direction here. Veeva is a repository for data and content, sure, but to get that data in the first place requires chaining together workflows on workflows on workflows, creating audit trails, chains of custody, and organizing it all in a regulatorily compliant way. Everyone will have agents, but who else will rival Veeva’s foundational R&D substrate? And why would that substrate be worth less than the agentic workflow layered on top of it?
I can understand how horizontal agents can cut across the enterprise. Microsoft 365, Workday, and Salesforce are not deeply regulated systems of record. Agents can sit above them, stitching workflows together. They might surface overdue accounts in Salesforce, cross-reference contract terms in SharePoint, and generate revenue recovery projections in Excel. Or open a job requisition in Workday and schedule interviews in Outlook. Maybe a constellation of specialized agents collaborate to accomplish these tasks, much the way humans do.
But unlike general-purpose enterprise workflows, clinical studies are a bounded, regulated process. The R&D modules used to manage them don’t need to coordinate with external enterprise systems and the more of them Veeva controls, the less it needs to coordinate with third-party apps at all. Veeva’s platform, fully realized, is as circumscribed an ecosystem as the clinical trials it helps manage.
Veeva R&D’s core function is to maintain an exhaustive, defensible record of every meaningful trial event. In that context, management sees AI playing an increasingly labor-displacing role, starting with the mundane, manual work that still consumes human time, like interpreting and filing eTMF documents and verifying that forms have been completed according to protocol. More ambitiously, AI could expand into generative territory…designing trial protocols, writing safety case narratives, or drafting responses to questions posed by regulators.
For such use cases, agents that are native to the SoR and work off the full context of structured records, documents, permissions, and audit trails would seem epistemically advantaged vs. external agents that have to pull data through connectors and reconstruct the full story from partial signals. Why would a sponsor that has already standardized on Veeva’s R&D platform look outside it for agents rather than use the ones built directly into the system?
There seems to be this growing fear that: 1) agents will capture more value than the SoRs they leverage and 2) the agent layer will be separate from the SoR. In Veeva’s case, I don’t think “2)” holds and as a consequence I’m not even sure “1)” makes much sense given how intertwined the two are. The more legitimate concern is that in their ability to coordinate workflows across systems, AI agents make it easier for large sponsors to continue running mixed best-of-breed stacks, reducing the urgency to migrate to a unified platform. Even so, I’d wager that a unified platform with native agents retains a meaningful edge over horizontal agents stitching together incomplete context across disconnected systems.”
As I mentioned before, if Veeva proves to be vulnerable to AI disruption, then my best wishes to every other enterprise software company out there.
I do, however, have a slight disagreement with Scuttleblurb, which I will discuss behind the paywall.
In addition to a “Daily Dose” (yes, DAILY) like this, MBI Deep Dives publishes one Deep Dive on a publicly listed company every month. You can find all 66 Deep Dives here.