Olsen Financial Technologies uses modern software tools and techniques to achieve our goal of a high-quality, efficient, maintainable code base. To that end, we use the following languages, depending on the application area:
| Language | Application area |
| --- | --- |
| C++, C | Performance-critical subsystems |
| Java | Internet-ready user interfaces |
| Perl | High-level, scriptable code-base extraction and maintenance tools |
In addition, we have found the following development tools extremely useful in improving the quality of our software:
| Tool | Purpose |
| --- | --- |
| Purify© | Run-time memory-use testing |
| Quantify© | Run-time performance optimization |
| PureCoverage© | Program code-coverage testing |
| Perforce© | Software configuration management |
| SparcWorks© | Compilers, development and debugging tools |
Purify, Quantify and PureCoverage are registered trademarks of Rational Software Corporation.
SparcWorks is a registered trademark of Sun Microsystems, Inc.
Perforce is a registered trademark of Perforce Software, Inc.
The constantly changing nature of the financial markets demands an architectural approach that allows a software system to grow and change along with the needs of the institution. New forms of data are constantly being introduced to the market, requiring customized interpretation and processing. New analytical tools and applications come into use, with their own data and interoperability requirements. In-house networks grow, and data traffic increases. These and other factors call for a scalable, adaptable architecture for systems dealing with financial data.
In addition, today's financial institutions demand a high level of reliability and availability from their support systems, which calls for a concerted approach to fault tolerance.
At Olsen Financial Technologies we have built a framework of services based on several key concepts that lend themselves well to this kind of flexibility and reliability. These concepts include a multi-layered (multi-tiered) structure, flexible run-time configuration, and a component-based approach to functionality.
A multi-layered structure enables a clean separation of responsibility among the user interface, data analysis, data collection and storage, and datafeed network layers. Our architecture is based on the following model:
[insert diagram of model: Olsen Data Services Layered Architecture]
The Data Access Services layer provides a consistent interface to all historical and real-time data. Thus, applications falling under Value Added Services—such as filtering, volatility analysis, and forecasting models—are able to receive and process incoming market data and republish the results of their analysis, all through a consistent interface. Likewise, user applications all utilize this same interface for accessing their real-time and historical data.
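The idea of one consistent interface serving historical queries, real-time subscription, and republication can be sketched in Java as below. This is a toy, in-memory illustration under assumed names (`Tick`, `DataAccessService`, and so on are invented for the example), not Olsen's actual API.

```java
import java.util.*;
import java.util.function.Consumer;

// A single standardized market-data record (fields are illustrative).
record Tick(String instrument, double price, long timestampMillis) {}

// One interface serves historical requests, real-time subscription, and
// republication, so value-added services and user applications all share
// the same access path.
interface DataAccessService {
    List<Tick> history(String instrument, long fromMillis, long toMillis);
    void subscribe(String instrument, Consumer<Tick> listener);
    void publish(Tick tick);   // value-added services republish results here
}

// Toy in-memory implementation: publish() both stores the tick for later
// historical queries and forwards it to live subscribers.
class InMemoryDataAccess implements DataAccessService {
    private final List<Tick> store = new ArrayList<>();
    private final Map<String, List<Consumer<Tick>>> subs = new HashMap<>();

    public List<Tick> history(String instrument, long from, long to) {
        List<Tick> out = new ArrayList<>();
        for (Tick t : store)
            if (t.instrument().equals(instrument)
                    && t.timestampMillis() >= from && t.timestampMillis() <= to)
                out.add(t);
        return out;
    }

    public void subscribe(String instrument, Consumer<Tick> listener) {
        subs.computeIfAbsent(instrument, k -> new ArrayList<>()).add(listener);
    }

    public void publish(Tick tick) {
        store.add(tick);
        for (Consumer<Tick> l : subs.getOrDefault(tick.instrument(), List.of()))
            l.accept(tick);
    }
}
```

A client that subscribes and a client that asks for history use the same object, which is the essence of the consistent-interface claim above.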
External producers and consumers of data are accommodated through the Data Conversion Services layer. For example, specially configured servers called “instrument collectors” subscribe to and receive market data from external data providers, such as Reuters, Bridge, and Bloomberg, and convert it into a standardized format. This real-time data is routed by the Data Access Services layer to servers that store it in a historical repository, tick by tick, where it then becomes available for later historical requests and—at the same time—is forwarded to all interested applications.
Data from legacy systems can be handled in a similar manner simply by implementing a thin layer within Data Conversion Services to handle the conversion process.
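The conversion step performed by an instrument collector (or by a thin legacy-conversion layer) might look like the following sketch. The vendor record layout and all names here are invented for illustration; real feeds from Reuters, Bridge, or Bloomberg each have their own formats.

```java
// Raw record as delivered by a hypothetical external feed.
class VendorQuote {
    String symbol;   // e.g. "EURUSD"
    String last;     // e.g. "1.0825"
    String time;     // epoch milliseconds as text
}

// The in-house standardized form (fields are illustrative).
class StandardTick {
    final String instrument;
    final double price;
    final long timestampMillis;
    StandardTick(String i, double p, long t) {
        instrument = i; price = p; timestampMillis = t;
    }
}

class InstrumentCollector {
    // Convert one vendor record into the standardized form before handing it
    // to the Data Access Services layer for storage and fan-out.
    StandardTick convert(VendorQuote q) {
        return new StandardTick(normalize(q.symbol),
                                Double.parseDouble(q.last),
                                Long.parseLong(q.time));
    }

    private String normalize(String vendorSymbol) {
        // e.g. "EURUSD" -> "EUR/USD"; real mappings would be configurable.
        return vendorSymbol.length() == 6
                ? vendorSymbol.substring(0, 3) + "/" + vendorSymbol.substring(3)
                : vendorSymbol;
    }
}
```

Handling a legacy system then amounts to writing one more such converter for the legacy record format, which is why the layer can stay thin.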
Flexible Run Time Configuration
All services implemented at Olsen are highly configurable so that they can easily be adapted to each customer's needs.
An example is the “instrument collectors” (see above) that convert real-time market data into a standardized form. Clearly, no single standardized form will suit every financial institution. Therefore the details of this form, and of the conversion process itself, are completely configurable, so that the instrument collectors can be tuned to particular needs.
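As a sketch of what such run-time configuration might look like, the fragment below loads collector settings from standard Java `Properties` with sensible defaults. The keys and the `CollectorConfig` class are hypothetical, chosen only to illustrate the pattern.

```java
import java.util.Properties;

// Hypothetical run-time configuration for an instrument collector; each
// customer supplies a properties file overriding only what they need.
class CollectorConfig {
    final String symbolFormat;     // shape of the standardized symbol
    final int priceDecimals;       // precision of the standardized price
    final boolean forwardRealTime; // forward ticks to live subscribers?

    CollectorConfig(Properties p) {
        symbolFormat    = p.getProperty("collector.symbol.format", "BASE/QUOTE");
        priceDecimals   = Integer.parseInt(
                              p.getProperty("collector.price.decimals", "5"));
        forwardRealTime = Boolean.parseBoolean(
                              p.getProperty("collector.forward.realtime", "true"));
    }
}
```

Because every setting has a default, a customer's configuration file only needs to state the values that differ from the standard behavior.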
It's a fact of life in software development that one can never completely foresee all the ways in which software will be used. This reality makes it very important to design software with adaptability, scalability and conceptual simplicity in mind. Our component-based approach to developing software makes it possible to implement systems that can be smoothly adapted as requirements change and easily scaled up as requirements grow.
All the services Olsen supports are implemented as a set of well-defined, cleanly interoperable components. When building a new service or set of services, or adapting an existing service to a specific customer's needs, these same components can be combined as necessary.
To address fault-tolerance requirements, and to support practically unlimited scalability, services can be duplicated across a network as needed. If one service becomes temporarily unavailable, the Data Access Services layer redirects clients to an alternate server that provides an identical service. To extend the redundancy of the system, the underlying historical time-series storage database can also be duplicated.
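The redirection idea can be illustrated with a toy router that tries replicated servers in order and falls through to the next when one is unavailable. The interface and class names are invented for the example; a production system would also need health checks, retry policy, and load distribution.

```java
import java.util.List;

// Stand-in for a replicated data server.
interface TickServer {
    boolean isAvailable();
    String fetch(String instrument);   // stands in for a real data request
}

class FailoverRouter {
    private final List<TickServer> replicas;
    FailoverRouter(List<TickServer> replicas) { this.replicas = replicas; }

    // Route a request to the first available replica. Because all replicas
    // provide an identical service, the client never sees which one answered.
    String request(String instrument) {
        for (TickServer s : replicas)
            if (s.isAvailable())
                return s.fetch(instrument);
        throw new IllegalStateException("no replica available");
    }
}
```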
An equally important aspect of this approach is that individual components must hide the mechanisms of their implementation from other components. That is, the relationship between components should be purely interface-based. In this way, as requirements change or as new techniques are developed, the underlying mechanism delivering a given functionality can be replaced without affecting any other components.
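A minimal Java illustration of this purely interface-based coupling: the client below depends only on a `PriceFilter` interface (a hypothetical name), so the filtering mechanism can be swapped without the client changing at all.

```java
// The contract: components see only this interface, never an implementation.
interface PriceFilter {
    double filter(double rawPrice);
}

// One mechanism: pass prices through unchanged.
class PassThroughFilter implements PriceFilter {
    public double filter(double rawPrice) { return rawPrice; }
}

// A replacement mechanism: clamp prices to a plausible range. Swapping this
// in requires no change to any client of PriceFilter.
class ClampFilter implements PriceFilter {
    private final double min, max;
    ClampFilter(double min, double max) { this.min = min; this.max = max; }
    public double filter(double rawPrice) {
        return Math.max(min, Math.min(max, rawPrice));
    }
}

// The client holds only the interface; the implementation stays hidden.
class FilterClient {
    private final PriceFilter filter;
    FilterClient(PriceFilter f) { filter = f; }
    double process(double p) { return filter.filter(p); }
}
```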