time to plan

terms
    RLC - Reactor Level Coordinator (BeagleBoard)
    RH  - Reactor Handler (goroutine)
    SH  - Sensor Handler (goroutine)

Reactor Side:

needs
    - way to discover active sensors
    - spin up a goroutine for each sensor, responsible for keeping status and logs (see the sketch after this list)
    - way to read back and truncate logs for safe data delivery to the server
    - routing of requests from the reactor level coordinator to the relevant sensor
    - internal memory sharing and channels for reactor level coordination
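
A minimal sketch of the goroutine-per-sensor idea, assuming a hypothetical request struct and one channel per sensor; none of these names come from the real code, and the "discovery" here is just a hard-coded address list:

package main

import (
    "fmt"
    "time"
)

// request routed from the RLC to a specific sensor handler
type request struct {
    kind  string      // e.g. "status" or "readLogs"
    reply chan string // channel the RLC waits on for the answer
}

// sensorHandler owns the status and logs for one sensor and serves
// requests sent to it over its own channel.
func sensorHandler(addr int, reqs <-chan request) {
    status := "online"
    var logs []string
    for {
        select {
        case r := <-reqs:
            switch r.kind {
            case "status":
                r.reply <- status
            case "readLogs":
                r.reply <- fmt.Sprintf("%d entries", len(logs))
                logs = logs[:0] // truncate after safe delivery
            }
        case <-time.After(time.Second):
            logs = append(logs, "poll") // placeholder for a real sensor read
        }
    }
}

func main() {
    // RLC side: one channel per discovered sensor, keyed by address
    sensors := map[int]chan request{}
    for _, addr := range []int{0x10, 0x11} { // stand-in for real discovery
        ch := make(chan request)
        sensors[addr] = ch
        go sensorHandler(addr, ch)
    }

    // route a status request to one sensor and wait for the reply
    reply := make(chan string)
    sensors[0x10] <- request{kind: "status", reply: reply}
    fmt.Println("sensor 0x10:", <-reply)
}

Routing from the RLC is then just a map lookup from sensor address to channel.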

thoughts
    - maybe the RLC can be responsible for packaging data for the coordinator response

    adv:
        - clears up the network
        - simplifies pinging
        - keeps the data arbitrary

    cons:
        - unknown data size
        - how to coordinate the data structure

Server Side:

needs
    - way to look into a config file for active reactors
    - should then spin up a goroutine for each reactor (see the sketch after this list)
        - responsible for recovery and consistent communication
        - individual database entries
    - API?
    - use gRPC for comms between the server and the BB
    - each reactor handler needs a mechanism for pinging, recovery, and database correctness
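
A rough sketch of the config-driven spin-up, assuming a made-up "one id and addr:port per line" config file named reactors.conf; the handler body is a placeholder for the real gRPC/recovery/database logic:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
    "sync"
)

// reactorHandler is a stand-in for the per-reactor goroutine that would
// own recovery, pinging, and database writes for one reactor.
func reactorHandler(id, addr string, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("handler started for reactor %s at %s\n", id, addr)
    // real version: dial gRPC to the BB, enter ping/recovery loop, write DB entries
}

func main() {
    // assumed config format: one "id addr:port" pair per line
    f, err := os.Open("reactors.conf")
    if err != nil {
        fmt.Println("no config:", err)
        return
    }
    defer f.Close()

    var wg sync.WaitGroup
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        fields := strings.Fields(scanner.Text())
        if len(fields) != 2 {
            continue // skip malformed lines
        }
        wg.Add(1)
        go reactorHandler(fields[0], fields[1], &wg)
    }
    wg.Wait()
}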

message PingRequest {
    // do we even need anything in a ping request?
}

message PingResponse {
    repeated Sensor sensor = 1;
}

message Sensor {
    string type = 1;
    bool status = 2;
    bytes data = 3;
}

sensors := make(map[string]Sensor) // rough idea: track sensor state keyed by type/id

6/23 TODO:

X- BBB mem fix
    - 32 gig for the main but where to put the OS?
    - obv in eMMC, but how to init the SD card? (probably dev tree :( )
Y- Server side impl
    Y - Need a struct for the RC
    X - Should we store and load configs based on IDs? (efficiency of this vs performance increases, i.e. bandwidth vs storage)
    Y/X - Who should kill the RC and how do we know it's dead? (Garbage collection to the rescue hopefully)

X- UPDATE PRES
    - Add bottlenecks for each part in that section
        - I2C: 128 addrs and ~90 bytes/s per device at 128 devs optimally
        - BB: hardware is upgradeable, even customizable ;)
        - Server: it's overkill as is, can benchmark with a rudimentary Go overload once it's completed
    - Sensor configs
        - how to store sensor info efficiently and searchably lol
        - who needs to know what the sensor is? (Just the SM? Even the SM?)
X- TUI
    - pls this would be so sick

TODO: 6-24

Y - Pres stuff from yesterday + python gRPC abstraction
Y - RPI flash
- Add resilience to the coordinator process (aka error handling blech)

TODO 6/27
- Time to tackle sensor managers officially
    - to hell with port generation
    - going to use channels but not like a jackass
    - going to try generating channels interface side but via implicit types to avoid the interface stuff
    - should set up a structure where I can use arbitrary types on the backend and fulfill methods to get/infer information on the frontend
    - rewrite the I2C interface to employ the same method, which should allow for this flow (see the sketch after this list):
        1) generate type
        2) send it to worker
        3) receive back (original? copy?)
        4) use interface methods to get required values
    - should simplify all internal communication and potentially suggests an API for implementation
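
A small sketch of that flow under assumed names (Reading, i2cReading, worker are all illustrative): the backend generates a concrete type, sends it to a worker over a channel, gets it back, and the frontend only ever touches it through a tiny interface that the type satisfies implicitly:

package main

import "fmt"

// Small interface the frontend cares about; concrete types satisfy it
// implicitly, so the backend can generate whatever it wants.
type Reading interface {
    Addr() int
    Value() float64
}

// concrete type the backend generates (here a fake I2C-style reading)
type i2cReading struct {
    addr int
    val  float64
}

func (r i2cReading) Addr() int      { return r.addr }
func (r i2cReading) Value() float64 { return r.val }

// worker receives arbitrary Readings, does its work, and sends them back
func worker(in <-chan Reading, out chan<- Reading) {
    for r := range in {
        // real version: poll hardware, update state, etc.
        out <- r
    }
}

func main() {
    in := make(chan Reading)
    out := make(chan Reading)
    go worker(in, out)

    // 1) generate type  2) send it to the worker
    in <- i2cReading{addr: 0x10, val: 7.2}
    // 3) receive it back  4) use interface methods to get required values
    r := <-out
    fmt.Printf("addr 0x%x -> %.1f\n", r.Addr(), r.Value())
    close(in)
}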

TODO 6/28
- It works... kind of
- I learned a lot, mostly that
    "the smaller the interface, the more useful it is"
-Y time to tackle the server side error handling aka reconnect
    - custom backoff? Max TO? Sleep period? (sketch below)
        5ms -> 10ms -> 25ms -> 50ms -> 100ms -> 250ms -> 500ms -> 1s -> 1s
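
A minimal reconnect-backoff sketch using exactly that schedule, capped at the final 1s step; dial() is a hypothetical stand-in for the real gRPC reconnect attempt:

package main

import (
    "errors"
    "fmt"
    "time"
)

// backoff schedule from the notes; once exhausted we stay at the last value
var schedule = []time.Duration{
    5 * time.Millisecond, 10 * time.Millisecond, 25 * time.Millisecond,
    50 * time.Millisecond, 100 * time.Millisecond, 250 * time.Millisecond,
    500 * time.Millisecond, time.Second,
}

// dial is a stand-in for the real reconnect attempt.
func dial(attempt int) error {
    if attempt < 5 {
        return errors.New("reactor unreachable")
    }
    return nil
}

// reconnect retries with the fixed schedule, capping at the final step.
func reconnect() {
    for attempt := 0; ; attempt++ {
        if err := dial(attempt); err == nil {
            fmt.Println("reconnected after", attempt, "retries")
            return
        }
        step := attempt
        if step >= len(schedule) {
            step = len(schedule) - 1 // cap at 1s
        }
        time.Sleep(schedule[step])
    }
}

func main() { reconnect() }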

-Y Actual logic?
    1) create listener
    2) create reactor managers for each reactor (rough sketch after this list)
        a) whose job is it to kill on disconnect? Should we kill?
        b) the RM will have the actual ping mechanism, with the coordinator aggregating in the eventual TUI
    3) reactivated reactors should connect to the same RM to resume connections with less downtime. Memory use be damned (ALLOCATED?? ID VS IP)
    4) need a way to purge manually disconnected reactors
        a) also should check ids, which are deterministic and map to actual hardware
    5) continue forever (for select??)
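
A rough accept-loop sketch of steps 1-3 and 5, assuming plain TCP instead of the real gRPC listener, with the remote address standing in for the deterministic hardware id and ReactorManager reduced to a stub:

package main

import (
    "fmt"
    "net"
)

// ReactorManager is a stand-in for the per-reactor manager; the same one is
// reused when a reactor with a known id reconnects.
type ReactorManager struct{ id string }

func (rm *ReactorManager) Start(conn net.Conn) {
    fmt.Println("RM", rm.id, "handling", conn.RemoteAddr())
    // real version: ping loop, status tracking, etc.
    conn.Close()
}

func main() {
    ln, err := net.Listen("tcp", ":2022") // port is illustrative
    if err != nil {
        panic(err)
    }
    managers := map[string]*ReactorManager{} // reactor id -> RM

    for { // 5) continue forever
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        id := conn.RemoteAddr().String() // real version: deterministic hardware id from a handshake
        rm, ok := managers[id]
        if !ok { // 2) new reactor -> new RM
            rm = &ReactorManager{id: id}
            managers[id] = rm
        } // 3) known reactor -> reuse the same RM
        go rm.Start(conn)
    }
}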

-Y RM Logic? (sketch after this list)
    1) on spawn, ping the reactor to get initial sensor status
    2) enter ping loop to get status and keep the heartbeat alive
    3) no action on a sensor going down (unless method but duh)
    4) on the reactor going down:
        a) save to config?
        b) "sleep mode", i.e. stop pinging and wasting resources doing stuff
        c) wait for the coordinator to reactivate it
    5) reactivation: call start and resume pinging
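
A sketch of that loop with assumed names: pingReactor stands in for the real gRPC ping, and a wake channel stands in for the coordinator calling start again on reactivation:

package main

import (
    "errors"
    "fmt"
    "time"
)

// pingReactor is a stand-in for the real ping; it returns sensor statuses.
func pingReactor() (map[string]bool, error) {
    return nil, errors.New("reactor down") // placeholder
}

// run is the RM loop: ping until the reactor goes down, then sleep
// until the coordinator reactivates it via the wake channel.
func run(wake <-chan struct{}) {
    for {
        status, err := pingReactor() // 1)/2) ping for status, keep heartbeat alive
        if err == nil {
            fmt.Println("sensors:", status) // 3) sensor-level changes are just recorded
            time.Sleep(time.Second)
            continue
        }
        // 4) reactor down: stop pinging ("sleep mode") and wait for reactivation
        fmt.Println("reactor down, sleeping")
        <-wake // 5) reactivation: resume pinging
        fmt.Println("reactivated, resuming pings")
    }
}

func main() {
    wake := make(chan struct{})
    go run(wake)
    time.Sleep(100 * time.Millisecond)
    wake <- struct{}{} // coordinator calls start again
    time.Sleep(100 * time.Millisecond)
}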

- RM Struct? (struct sketch after this list)
    - needs to know
        - ip:port of the reactor
        - id of the reactor
        - mapping of sensors to status
        - last seen for the reactor (and sensor?)
    - needs to be able to
        - ping the reactor for status
        - get status
        - store and update status reliably
        - stop pinging on down detection
        - detect outages
        - respond to coordinator requests? (rpc or method most likely?)
    - relies on
        - SM for sensor status
    - implements
        - start/restart mechanism for the coordinator
        - ping response for the coordinator
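
A possible struct matching that list; field, method, and sensor names are illustrative only:

package main

import (
    "fmt"
    "sync"
    "time"
)

// ReactorManager: one per reactor, mirroring the "needs to know" list above.
type ReactorManager struct {
    mu       sync.Mutex      // guards the status map for reliable updates
    Addr     string          // ip:port of the reactor
    ID       uint32          // deterministic reactor id
    Sensors  map[string]bool // sensor -> up/down status
    LastSeen time.Time       // last successful ping (per-sensor times could be added)
}

// UpdateStatus stores the latest sensor statuses and refreshes LastSeen.
func (rm *ReactorManager) UpdateStatus(s map[string]bool) {
    rm.mu.Lock()
    defer rm.mu.Unlock()
    rm.Sensors = s
    rm.LastSeen = time.Now()
}

// Status answers coordinator requests with a copy of the current mapping.
func (rm *ReactorManager) Status() map[string]bool {
    rm.mu.Lock()
    defer rm.mu.Unlock()
    out := make(map[string]bool, len(rm.Sensors))
    for k, v := range rm.Sensors {
        out[k] = v
    }
    return out
}

func main() {
    rm := &ReactorManager{Addr: "192.168.0.10:2022", ID: 1, Sensors: map[string]bool{}}
    rm.UpdateStatus(map[string]bool{"ph": true, "do": false})
    fmt.Println(rm.Status())
}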

- Coordinator Struct? (struct sketch after this list)
    - needs to know
        - mapping of ids of connected reactors to RMs
        - its own ip:port to serve the listener on
        - internal mapping of the system? (any efficiency benefits here vs mem usage?)
    - needs to be able to
        - set up new RMs on incoming connections
        - call existing RMs on incoming connections
        - ping for reactor status from RMs
        - store status for the TUI?
    - relies on
        - RM for reactor status
    - implements
        - application logic
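
A matching coordinator sketch under the same assumptions; the ReactorManager here is a stub so the snippet stands alone, and the method names are invented for illustration:

package main

import (
    "fmt"
    "sync"
)

// ReactorManager placeholder so this sketch compiles on its own; the real
// one is the RM described above.
type ReactorManager struct{ Addr string }

func (rm *ReactorManager) Status() map[string]bool { return map[string]bool{} }

// Coordinator: owns the listener address and the id -> RM mapping.
type Coordinator struct {
    mu       sync.Mutex
    Addr     string                     // its own ip:port to serve the listener on
    Reactors map[uint32]*ReactorManager // connected reactor ids -> RM
}

// Manager returns the existing RM for an id, or creates one for a new
// incoming connection (reuse keeps downtime low on reconnect).
func (c *Coordinator) Manager(id uint32, addr string) *ReactorManager {
    c.mu.Lock()
    defer c.mu.Unlock()
    if rm, ok := c.Reactors[id]; ok {
        return rm
    }
    rm := &ReactorManager{Addr: addr}
    c.Reactors[id] = rm
    return rm
}

// SystemStatus pulls status from every RM, e.g. for the TUI.
func (c *Coordinator) SystemStatus() map[uint32]map[string]bool {
    c.mu.Lock()
    defer c.mu.Unlock()
    out := map[uint32]map[string]bool{}
    for id, rm := range c.Reactors {
        out[id] = rm.Status()
    }
    return out
}

func main() {
    c := &Coordinator{Addr: ":2022", Reactors: map[uint32]*ReactorManager{}}
    c.Manager(1, "192.168.0.10:2022")
    fmt.Println(c.SystemStatus())
}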

That went surprisingly well... Same method for the TUI

process outline:

TUI
    1) display the TUI outline and connect to the coordinator
    2) let the coordinator know where to send reactor/sensor changes
    3) enter a loop of checking for changes and drawing
    4) on quit, exit gracefully

Coordinator (sketch after this outline)
    1) on TUI connection, start a routine
    2) collect? sensor/reactor info and send the initial state to the TUI
    3) upon changes? send them to the TUI
    4) exit when the TUI connection closes
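
One way the coordinator routine and the TUI loop could meet in the middle, sketched with an in-process channel standing in for the real transport; StatusUpdate, the ticker, and the done channel are all assumptions of this sketch:

package main

import (
    "fmt"
    "time"
)

// StatusUpdate is what the coordinator pushes to a connected TUI.
type StatusUpdate struct {
    Reactor uint32
    Sensor  string
    Up      bool
}

// serveTUI: on TUI connection, send the initial snapshot, then forward
// changes until the TUI goes away (done closes).
func serveTUI(updates chan<- StatusUpdate, done <-chan struct{}) {
    // initial snapshot (a single entry in this sketch)
    updates <- StatusUpdate{Reactor: 1, Sensor: "ph", Up: true}
    ticker := time.NewTicker(50 * time.Millisecond)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C: // real version: fire only on actual state changes
            select {
            case updates <- StatusUpdate{Reactor: 1, Sensor: "ph", Up: false}:
            case <-done:
                return
            }
        case <-done: // TUI connection closed -> exit the routine
            return
        }
    }
}

func main() {
    updates := make(chan StatusUpdate)
    done := make(chan struct{})
    go serveTUI(updates, done)

    // TUI side: loop checking for changes and (pretend) drawing them
    for i := 0; i < 3; i++ {
        u := <-updates
        fmt.Printf("draw: reactor %d sensor %s up=%v\n", u.Reactor, u.Sensor, u.Up)
    }
    close(done) // graceful exit on quit
}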

- TUI struct
    - needs to know
        - all the tui stuff (duh)
        - reactor/sensor status
    - needs to be able to
        - create (several instances of) a TUI
        - receive and display up-to-date system info
        - delete reactors/sensors
        - be efficient
            - I know this is broad but bear with me
    - relies on
        - coordinator struct for system info
        - coordinator struct to fulfil delete requests
    - implements
        - tui
        - user interface for management

- Coordinator Struct
    - needs to know
        - all the sensor and reactor states
    - needs to be able to
        - know what the TUI knows/doesn't know
        - efficiently notify the TUI of changes in the system (see the diff sketch at the end)
        - notify sensors or reactors of removal
    - relies on
        - rm/sm to implement reactor/sensor removal
        - rm for reactor/sensor status
    - implements
        - sender of system status for the TUI
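
One possible way to "know what the TUI knows": keep the last state sent to the TUI and diff it against the current state, pushing only the changed entries. The keys here are invented for illustration:

package main

import "fmt"

// diff compares current system state against what was last sent to the TUI
// and returns only the changed entries, so notifications stay small.
// (Removals, i.e. keys present in sent but not in current, would need a second pass.)
func diff(current, sent map[string]bool) map[string]bool {
    changed := map[string]bool{}
    for k, v := range current {
        if old, ok := sent[k]; !ok || old != v {
            changed[k] = v
        }
    }
    return changed
}

func main() {
    sentToTUI := map[string]bool{"reactor1/ph": true, "reactor1/do": true}
    current := map[string]bool{"reactor1/ph": true, "reactor1/do": false}

    for k, v := range diff(current, sentToTUI) {
        fmt.Printf("notify TUI: %s -> %v\n", k, v)
        sentToTUI[k] = v // remember what the TUI now knows
    }
}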