Risk.net: Hello, and welcome to this risk.net Q&A, where we'll be talking about xVA calculation and management and the challenges involved and how technology can solve some of those challenges. With me in the studio today are Stuart Nield and Abhay Pradhan of the Financial Risk Analytics team at IHS Markit. Stuart, Abhay, welcome to you.
Abhay Pradhan: Thank you.
Risk.net: Stuart, derivatives valuation has changed greatly in the last decade, with an increasing number of adjustments to the market price. What are some of the main challenges that you're seeing in the xVA space right now?
Stuart Nield: So I think the main change continues to be understanding all the costs involved in trading derivatives. What that means is that banks need to calculate what I refer to as the classic valuation adjustments, those to do with counterparty credit risk and funding, but also newer valuation adjustments: those concerning the cost of margin and those concerning the cost of regulatory capital.

Even on the classic valuation adjustments, there are still challenges. If you look at something like funding valuation adjustment, or FVA, for example, we have clients that are interested in calculating that using asymmetric funding assumptions. What that means is you need to calculate your funding requirement across the entire derivatives portfolio, and then apply different curves for your borrowing and your lending spreads. Now, this is something that traditional CVA systems will struggle with, because they work on a counterparty-by-counterparty basis, so they never form a globally scenario-consistent view of all the trades in the portfolio. To calculate it correctly, you need to simulate all your risk factors, value all your trades, and then roll them up across your entire derivatives book. That involves hundreds of thousands of trades, and processing trade-level mark-to-market valuation matrices, so you can see this is a large compute and data challenge.

Coming on to some of the newer valuation adjustments, if we look at MVA, or margin valuation adjustment, there's increased interest in that measure, driven by the bilateral margin rules, which are rolling out over the next couple of years. This is another heavyweight calculation: you need to calculate the initial margin on each Monte Carlo path and at each future time step, so you can see that also involves a lot of data. Another key thing is consistency between the various valuation adjustments.
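[Editor's note: the asymmetric FVA aggregation described above can be sketched as follows. This is an illustrative toy example with invented function names and toy numbers, not IHS Markit's implementation: per scenario path and time step, trade values are netted across the whole book, and the borrowing spread applies when the book needs funding while the lending spread applies when it generates cash.]

```python
import numpy as np

def asymmetric_fva(trade_values, borrow_spread, lend_spread, discount, dt):
    """Toy asymmetric FVA. trade_values has shape (trades, paths, steps).

    The funding requirement is netted across the whole book per path/step:
    the borrow spread applies when the net book value needs funding
    (positive), the lend spread when it generates cash (negative).
    """
    net = trade_values.sum(axis=0)               # (paths, steps): book-level netting
    borrow_cost = np.maximum(net, 0.0) * borrow_spread
    lend_benefit = np.minimum(net, 0.0) * lend_spread
    # Discount each step's funding cost/benefit and average over the paths
    per_path = ((borrow_cost + lend_benefit) * discount * dt).sum(axis=1)
    return float(per_path.mean())

rng = np.random.default_rng(0)
tv = rng.normal(size=(100, 500, 40))             # 100 trades, 500 paths, 40 steps
disc = np.exp(-0.02 * np.arange(40) * 0.25)      # toy flat 2% discounting, quarterly
fva = asymmetric_fva(tv, borrow_spread=0.01, lend_spread=0.002,
                     discount=disc, dt=0.25)
```

The netting step is the point a counterparty-by-counterparty system cannot reproduce: the sign of the book-level net value, not of each counterparty's exposure, decides which spread applies.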
If I look at MVA again, and at how it interacts with KVA, the initial margin I mentioned previously can be used to offset the exposure that feeds into the counterparty credit risk component of the KVA calculation. So, to realize the capital benefits of receiving initial margin, you need to have KVA and MVA in the same Monte Carlo simulation, essentially in the same system. One final challenge to mention on this topic of capital: the CVA risk component that features in KVA is changing in 2022, following the post-Basel III regulatory reforms. That means banks are starting to look at what the capital costs would be of the long-dated trades they're booking today. So not only do you need to calculate CVA at the enterprise level, but you need to do it sufficiently quickly to allow a trader to make a decision about whether to go for a particular trade.
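[Editor's note: the MVA calculation described above, initial margin computed on each Monte Carlo path at each future time step, can be sketched as a toy funding-cost integral over the expected IM profile. Function and parameter names here are illustrative, not from any vendor system.]

```python
import numpy as np

def mva(im_paths, funding_spread, discount, dt):
    """Toy MVA. im_paths has shape (paths, steps), holding the initial
    margin computed on each Monte Carlo path at each future time step.

    MVA is the cost of funding that posted margin: the discounted
    expected IM profile times the funding spread, summed over time.
    """
    expected_im = im_paths.mean(axis=0)          # expected IM profile over time
    return float((expected_im * discount * funding_spread * dt).sum())
```

The same per-path IM cube that drives this funding cost is what can offset exposure in the KVA calculation, which is why computing both in one simulation avoids inconsistency between the two measures.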
Risk.net: Wow, that's a lot to be thinking about. And presumably, as you said, the compute and data challenge on that is quite vast. Abhay, what does that look like from a technology perspective?
Abhay Pradhan: Right. What Stuart has described is essentially a big data challenge, namely: how do we compute efficiently across large data sets, terabytes in size, and get our risk measures out in a timely fashion? When enhancing our product, we took a look and realized that the big technology companies, the likes of Facebook, Amazon, Google and Netflix, had already solved some of these problems in their own specific domains. We also realized that they had open-sourced some of their platforms and components. We've taken those open-source components and integrated them into a stack, so that we can focus on our core competency, which is pure risk analytics.

This helps us in a couple of ways. One is that there's an open-source standard: we use Apache Hadoop, which is a well-recognized open-source platform. It lets us run on commodity hardware, and architectural concerns such as data-center resiliency, failover and being able to compute at scale are handled by the platform itself; the cost of doing that internally would be quite high. The other important factor is that security fixes, bug fixes and performance improvements happen automatically in the open-source community; doing this internally, and trying to make sure our product is secure, is a lot harder than leveraging what the open-source community gives us.

Because we can store and cache all this data at scale, one of the powerful features of our product is close-to-real-time 'what if' capabilities. Traders want to know: what happens if I price a new trade? How does this impact my top-of-the-house numbers? We can do this in seconds.
We can also explain all our measures. As Stuart mentioned, we aggregate from the trade-level valuation matrices all the way up to the entity-level measures, storing the intermediate results, which wasn't possible before. This gives our product powerful explain capabilities.
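[Editor's note: the close-to-real-time 'what if' idea described above can be sketched as follows. This is a toy illustration with invented names: the book-level simulated values are cached once, so pricing a candidate trade only requires adding that trade's own paths rather than re-simulating the whole book.]

```python
import numpy as np

class WhatIfEngine:
    """Toy incremental 'what if': cache the book-level exposure paths
    once, then evaluate a candidate trade by adding only its own
    simulated values to the cached aggregate."""

    def __init__(self, trade_values):
        # trade_values: (trades, paths, steps) simulated mark-to-market values
        self.book_paths = trade_values.sum(axis=0)   # cached (paths, steps) aggregate

    def epe(self, extra_trade=None):
        """Expected positive exposure profile, optionally with a new trade."""
        paths = self.book_paths
        if extra_trade is not None:
            paths = paths + extra_trade              # incremental: no full re-run
        return np.maximum(paths, 0.0).mean(axis=0)
```

The heavy Monte Carlo run happens once at construction; each subsequent 'what if' is a cheap array addition over the cached cube, which is what makes seconds-level turnaround plausible.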
Risk.net: That sounds like a lot to be dealing with. Looking at the practicalities, what does that mean, in terms of implementation? You know, are clients prepared to come and replace their legacy systems?
Stuart Nield: Well, I think typically clients have invested very heavily in their CVA infrastructure. So a key challenge they face is how to add new measures in an efficient way; MVA would be a very good example of that. And I think another challenge they face is how they can answer ad hoc questions from the business with that infrastructure. A typical question would be: what's the capital impact if I take on this derivatives portfolio from my competitor?
Risk.net: Okay, and presumably, cost must be a fairly big issue here. Are there any sort of benefits and economies that these new technologies are bringing to clients?
Abhay Pradhan: Right. Traditionally, there have always been two main costs involved in any xVA system. One is the infrastructure cost: the provisioning, purchase and maintenance of a large-scale compute grid, which you always have to size for the worst case; you can run a specific workload, and that's about it. Experimental requests, like if Stuart wants to run a large-scale calculation, are just not possible, because the hardware is constrained by a one-off purchasing decision. The other is the software cost, where you end up with a monolithic piece of software that is a black box to clients, and they have to re-engineer all their systems to fit into it, so the client often loses a lot of flexibility.

Our product tries to mitigate both of these costs. The first way we've done that is by moving to the cloud. We are cloud native, by which I mean we are agnostic of any vendor, whether that's Amazon Web Services, Microsoft Azure or Google Cloud Platform; our code runs on any of them without change. We've containerized all our code, which means we can spin up a cluster in minutes, run our analytical simulations, and tear it down when we're done. Experimental requests are easy, because we just scale the hardware up and down based on the request, and we only pay for what we use.

On the software side, all our components have robust, open APIs, and this allows clients to plug and play and pick exactly what they want to use. They might have their own simulation models, their own pricing models in which they have a lot of IP, and all they want is our analytical engine, which does the aggregations. We provide a well-defined schema and an API so they can reuse what they have and just use our aggregation layer to run analytics. This gives clients flexibility, which I think is very important.
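[Editor's note: the plug-and-play idea described above, a well-defined schema for client-produced simulation output feeding a vendor aggregation layer, can be sketched like this. The schema and function names here are hypothetical illustrations, not IHS Markit's actual API.]

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ExposureCube:
    """Hypothetical exchange schema: simulated values for one trade,
    keyed by counterparty, as produced by the client's own
    simulation and pricing models."""
    counterparty: str
    values: np.ndarray          # shape (paths, steps)

def aggregate_epe(cubes):
    """Illustrative vendor-side aggregation layer: net the cubes per
    counterparty, then compute an expected positive exposure profile."""
    netted = {}
    for cube in cubes:
        netted[cube.counterparty] = netted.get(cube.counterparty, 0) + cube.values
    return {cp: np.maximum(v, 0.0).mean(axis=0) for cp, v in netted.items()}
```

Because the interface is just the data schema, a client can swap in their own pricing models upstream and still reuse the aggregation layer unchanged, which is the flexibility being described.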
Risk.net: Okay. I suppose the question that throws up is: what's next? What does the future look like? What's coming, and what should banks be thinking about as they prepare for the future?
Stuart Nield: Yeah, well, I think a key thing banks want to do is optimize across the various costs they incur in trading derivatives. As we've touched on, the only way you can really do that is by having all the measures in a single place, which brings up these compute challenges and also these challenges around how you integrate with your existing infrastructure. So I think we'll see more and more of the technologies Abhay has mentioned today in the xVA space.
Risk.net: Stuart, Abhay, thanks for sharing your thoughts. And thank you for watching. You can find out more on this topic by searching 'IHS Markit xVA' at risk.net.