
FRMS

ForeLight Reactor Management System

Time to get to business

Notes

Main design principles

  • Flexible
  • Reliable
  • Scalable

These principles guide our design choices.

Flexible

- The system should support any reactor configuration
- Sub-packages should be changeable in isolation
    - Portability of packages, i.e. we can swap databases or substitute memory-based testing packages

Reliable

- Should tolerate any number of reactors failing (including network-related failures)
- Should provide log and database correctness via atomic commits
- Automatic log recovery; reactors keep functioning despite sub-system or network failures
- 100% uptime and seamless updates (stretch goal)

Scalable

- Add and use sensor packages at will
- Group reactors logically regardless of physical location
- Scale infrastructure to efficiently support any number of reactors

Brainstorming

Everything here is subject to change

At first I want to focus on the "backend" of the system, which I consider to be everything from the server down to the sensors. This part of the system leaves me with a central database of values, which then lets me do whatever I want on the frontend.

To make the backend more manageable, and hopefully build a more modular system, I want to design the backend in stages, through the use of interfaces and independent packages. This codebase is sure to be refactored many times over its lifetime, but I hope not to have to stray far from the initial design. I will be attempting to follow Go's core design philosophy, at least as best as I understand it.

Getting into the core of the plan, I hope to lay out Phase 1 and create a working document that I can reference and change. The plan is as follows:

- Phase 1: Create a working backend of the system with a few basic sensor packages, i.e. temperature, DO, etc.
- Phase 2: Implement a frontend control and user-management service where the rest of the company can view and update reactor parameters in real time.
- Phase 3: Revisit the backend and start to create a private infrastructure to group reactors logically based on an internal structure, allowing for remote reactor control.

During any of these phases the expectation is that I go back and tweak and repair things as needed. The software is built with the goal and expectation that repairs to sub-packages should not impact other packages' implementations beyond the necessary changes.

Now let's get into some more concrete design choices.

On the server, several Docker containers will run to help keep the application streamlined.

The main container will create the coordinator process, which is also responsible for creating the sub-management processes. This architecture will be expanded upon in later sections, but in essence it allows for an easier implementation of the skeleton and forces the sensor data acquisition to be smarter at the application layer.

There will also be a container responsible for database manipulations, and this could and should be a separate package. Making it a package lets us create a memory-based implementation for easier testing.
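A minimal sketch of that package boundary, assuming hypothetical method names (`Put`, `Latest`) that are not the project's real API: any code written against the interface works identically with the real database container or the in-memory test double.

```go
package main

import (
	"fmt"
	"sync"
)

// Store is a speculative interface for the database package.
type Store interface {
	Put(sensorID string, value float64) error
	Latest(sensorID string) (float64, bool)
}

// MemStore is a memory-based implementation, useful for tests where a
// real database container would be overkill.
type MemStore struct {
	mu   sync.Mutex
	data map[string]float64
}

func NewMemStore() *MemStore {
	return &MemStore{data: make(map[string]float64)}
}

func (m *MemStore) Put(id string, v float64) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[id] = v
	return nil
}

func (m *MemStore) Latest(id string) (float64, bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	v, ok := m.data[id]
	return v, ok
}

func main() {
	// Code that depends only on Store never notices which backend it got.
	var s Store = NewMemStore()
	s.Put("reactor1/temp", 21.5)
	if v, ok := s.Latest("reactor1/temp"); ok {
		fmt.Println(v) // prints 21.5
	}
}
```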

The frontend will also have a host of other containers, but those will be covered in Phase 2.

Starting from the coordinator: at initial boot, the coordinator will read a central reactor config file listing all of the active reactor IP addresses. Phase 1 assumes control at the network level, so we can use IP addresses as identification. This limits us to 255 active reactors, but we can expand toward 256 × 256 × 256 by using the entire private address space with a more clever identification schema; 256 will be suitable for now. (As a side note, the coordinator could also send a discovery ping to add reactors dynamically, but this is harder to implement and should be saved for Phase 3.)

For each reactor, the coordinator will spin up a subordinate reactor-level coordinator in a goroutine. This sub-coordinator will be responsible for retrieving a reactor's active sensors and creating the associated goroutines for each acquisition. This choice fits more easily into our package structure and also gives us very clean reactor information, since we can easily tell which sensors are active and up to date, or recovering after a network outage.

For each active sensor, the sub-coordinator will call the associated package and create a goroutine for that sensor. This long-running routine will respond to gRPC calls from either coordinator (this may be unnecessary, and we might end up killing the sub-coordinator after reactor init) to update sensor function and make sure that the central databases are up to date. We can simplify log recovery with a custom logging schema (term- and seq-based?) and create atomic logging practices based on checks between the coordinator and the local reactor.

At the reactor level, a central coordinator will be responsible for creating goroutines for each active sensor based on config at boot, and for responding to incoming RPC calls. Each sensor goroutine will be in charge of polling its sensor and storing readings in a temporary log file via custom interfaces. This log file will have to be cleaned periodically, and we should look into 16 GB or even larger flash storage options.
Each subprocess will handle the associated RPC calls to update sensor values and respond to requests for log info. This will probably have to be sufficiently generic, or else specialized per sensor type.
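To make the "term and seq based" logging idea above concrete, here is one speculative shape it could take: a term that bumps on each reactor restart or recovery, a sequence number that is monotonic within a term, and a gap check that tells the coordinator exactly which range to re-fetch. All names and semantics here are guesses at a design, not the decided schema:

```go
package main

import "fmt"

// LogEntry sketches the speculative term/seq schema: Term increments
// whenever a reactor restarts or recovers, and Seq increases
// monotonically within a term, so either side can detect a gap.
type LogEntry struct {
	Term  uint64
	Seq   uint64
	Value float64
}

// missingRange returns the span of sequence numbers the coordinator
// still needs, given its last committed seq and the reactor's latest.
func missingRange(committed, latest uint64) (from, to uint64, gap bool) {
	if latest > committed {
		return committed + 1, latest, true
	}
	return 0, 0, false
}

func main() {
	from, to, gap := missingRange(41, 45)
	fmt.Println(from, to, gap) // the coordinator should fetch seq 42 through 45
}
```

This is the same bookkeeping trick Raft-style logs use for recovery, which is likely where the "term" terminology points.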