Using DevOps With Multi-Cloud Kubernetes To Build A Hedge Fund Trading Signal Platform
Say goodbye to the monolithic way of developing applications. Leverage the power of containers and Kubernetes.
Hedge Funds use trading signals to chart out investment and redemption strategies. To create these trading signals, sophisticated funds rely on alternative external datasets.
Our client was building a suite of Python applications to power this trading signal platform. To take it further, the client also planned to build big data processing pipelines on top of it.
The client was struggling with a slow release process, complex and error-prone code, and a lack of coordination among teams. They were fed up with manually fixing newly deployed code.
Our objectives were:
- Adding functionalities to the application faster.
- Reducing bugs and making the application stable.
- Making the application deployment fast and environment agnostic.
The toolset we chose:
- Azure DevOps for software lifecycle management
- Microsoft Teams for documentation
- Azure DevOps Git Private Repo as the source code repository
- Azure DevOps Pipelines for Continuous Integration and Deployment of new code, with automated tests
- Azure Container Registry
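For illustration, a CI/CD pipeline built on these tools might look like the following simplified azure-pipelines.yml sketch. The service connection name, image repository, Python version, and test path are all hypothetical, not the client's actual configuration:

```yaml
# Hypothetical azure-pipelines.yml sketch; names and paths are illustrative.
trigger:
  branches:
    include: [main]

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'

  # Install dependencies and run the automated test suite before anything ships
  - script: |
      pip install -r requirements.txt
      pytest tests/
    displayName: Run unit tests

  # Build the application image and push it to Azure Container Registry
  - task: Docker@2
    inputs:
      containerRegistry: 'acr-service-connection'  # hypothetical service connection
      repository: 'signals-app'                    # hypothetical image repository
      command: buildAndPush
      tags: '$(Build.BuildId)'
```

Because the tests run before the image is built, a change that breaks the suite never reaches the registry.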
We use proven core components, built on industry-tested technologies and platforms, to provide customised solutions for our clients.
On top of that, we use the Factory Model to chisel out the best possible solution for our clients. The Factory Model ensures faster execution and development.
When the client approached us, the first thing that we noticed was that there was a lack of unification within the team.
When new code broke something, friction between teams became inevitable. It was unclear who owned which sub-project.
New code was not adequately tested and often shipped with missing dependencies. As a result, it worked on the developer's computer (yes, that age-old excuse!) but broke the application in the testing environment.
Even when the code somehow worked, the core components of the application (the scripts, software, and services) were not tightly integrated.
So we needed to look at the whole project from two different perspectives. First, we had to come up with something that would reduce the friction within the team. Only then could they focus on writing quality code.
Second, functionality releases were slow, and the development process was not agile. We needed to speed up the commit cycle and cut deployment time by ensuring that new releases shipped with all their dependencies, so they would not get flagged during testing.
At its core, Cilio Automation Factory believes that when you elevate the employee experience, the customer experience is automatically elevated as well.
How It Works
Deliver Only Exceptional Quality, And Improve!
Our client wanted us to restructure the application development process. They wanted to add functionality faster and quicken the pace of new code deployment. Overall, their goal was to make the application bug-free and stable.
Azure DevOps for lifecycle management, Microsoft Teams, Azure DevOps Git Private Repo, Azure DevOps Pipelines for Continuous Integration and Deployment of new code, and Azure Container Registry. We also leveraged Docker and Kubernetes.
We built a CI/CD platform with Azure Pipelines. For project management, we followed Scrum. We then dockerized the dependencies and used Kubernetes to orchestrate the containers.
Happier employees, which reduced attrition by 40%. The application no longer suffers from missing dependencies. Bugs are now caught before release. The client can now target multiple deployments in a day, and scalability is no longer an issue.
Given the twofold task of organizing the client's internal team and making software development agile, we identified Azure DevOps as the preferred development platform.
We leveraged Azure Pipelines to build the Continuous Integration and Continuous Deployment pipeline for our client. This is where it gets interesting: we chose GitHub as the code repository, and since Microsoft now owns GitHub, its integration with Azure Pipelines has become smoother.
This setup served two purposes. First, with CI/CD in place, we can now identify exactly whose change broke the build, effectively ending the blame game. Second, a failed test run no longer causes frustration: the client can use git revert to roll back to a functional state.
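In practice, such a rollback looks something like the following sketch, run here against a throwaway repository (the file name and commit messages are illustrative):

```shell
# Demonstration in a throwaway repo; file and commit names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "stable logic" > signal.py
git add signal.py && git commit -qm "working release"

echo "broken logic" > signal.py
git add signal.py && git commit -qm "release that fails CI"

# git revert creates a NEW commit that undoes the bad one; history stays intact
git revert --no-edit HEAD

cat signal.py   # prints "stable logic"
```

Because revert adds a commit instead of rewriting history, the rollback itself flows through the same CI pipeline as any other change.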
The client was suffering from a disjointed team and a slow workflow, so we leveraged Scrum-based project management to ensure a speedy, friendly way of developing. Once again, we did not have to look for external tools: Microsoft Teams had everything needed to facilitate Scrum-based project planning and management.
Once the backbone of the development process was created, a three-member team from AutomationFactory.AI began supporting the client with its expertise in containerization and Kubernetes. Dockerization solved the problem of broken dependencies: each container is now self-sufficient, with all its dependencies inside.
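A typical image for one of these Python applications might look like this minimal Dockerfile sketch; the file and module names are illustrative, not the client's actual project layout:

```dockerfile
# Minimal sketch; file and module names are illustrative.
FROM python:3.10-slim

WORKDIR /app

# Bake every dependency into the image so it runs the same everywhere:
# no more "it works on my machine"
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "-m", "signals"]
```

Installing dependencies in a separate layer before copying the source means Docker can cache that layer and rebuild only when requirements.txt changes.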
Furthermore, at a basic level, we used Azure Container Registry to aid the deployment of containers. However, the client needed to be able to deploy the containers anywhere, not just in Azure. So we rolled out a Kubernetes platform customized specifically for the client, ensuring a consistent way of orchestrating containers at scale. And since Kubernetes is an open-source, platform-agnostic system, the whole application can be set up on AWS, GCP, Azure, or even on-prem servers.

We set up a Kubernetes cluster with several worker nodes to ensure that the servers do not get bottlenecked. With Python-based automation working in the background, autoscaling the nodes became effortless.
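The scaling decision behind such automation can be sketched as a simplified version of the formula Kubernetes' own horizontal autoscaler uses. The utilization target and replica bounds below are illustrative defaults, not the client's actual values:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float = 0.6,
                     min_replicas: int = 2,
                     max_replicas: int = 10) -> int:
    """Scale replicas so average utilization approaches the target.

    Mirrors the Kubernetes HPA formula:
        desired = ceil(current * currentUtil / targetUtil)
    clamped to [min_replicas, max_replicas].
    """
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# Example: 4 replicas running at 90% CPU against a 60% target -> scale up to 6
print(desired_replicas(4, 0.9))
```

In a real cluster this decision is applied by patching the Deployment's replica count (or delegated to the built-in HorizontalPodAutoscaler); the sketch only shows the arithmetic that drives it.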