time to plan

terms

RLC - reactor level coordinator (Beagleboard)
RH - Reactor Handler (goroutine)
SH - sensor handler (goroutine)

Reactor Side:

needs
- way to discover active sensors
- spin up a goroutine for each sensor, responsible for keeping status and logs
- way to read back and truncate logs for safe data delivery to the server
- routing requests from the reactor level coordinator to the relevant sensor
- internal memory sharing and channels for reactor level coordination (rough sketch below)
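A minimal sketch of that reactor-side shape, assuming a hypothetical scanI2C() for discovery and one request channel per sensor handler; every name here is a placeholder, not the actual implementation:

package main

import "fmt"

// statusReq is how the RLC asks a sensor handler for its status.
type statusReq struct{ reply chan bool }

// sensorHandler is a hypothetical per-sensor goroutine (the SH).
// It owns the sensor's status; the RLC talks to it only through reqs.
func sensorHandler(addr int, reqs <-chan statusReq) {
    online := true // placeholder; a real SH would poll the device and keep logs
    for req := range reqs {
        req.reply <- online
    }
}

// scanI2C is a stand-in for whatever bus scan the RLC ends up using.
func scanI2C() []int { return []int{0x10, 0x11} }

func main() {
    handlers := make(map[int]chan statusReq)

    // discover active sensors and spin up a goroutine for each
    for _, addr := range scanI2C() {
        ch := make(chan statusReq)
        handlers[addr] = ch
        go sensorHandler(addr, ch)
    }

    // RLC side: route a status request to the relevant sensor handler
    for addr, ch := range handlers {
        reply := make(chan bool)
        ch <- statusReq{reply: reply}
        fmt.Printf("sensor 0x%x online: %v\n", addr, <-reply)
    }
}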
thoughts
- maybe the RLC can be responsible for packaging data for the coordinator response

adv:
- clears up the network
- simplifies pinging
- keeps the data arbitrary

cons:
- unknown data size
- how to coordinate the data structure
Server Side:

needs
- way to look into a config file for active reactors
- should then spin up a goroutine for each reactor
- responsible for recovery and consistent communication
- individual database entries
- API?
- use gRPC for comms between the server and the BB
- each reactor handler needs a mechanism for pinging, recovery, and database correctness
message PingRequest {
    // do we even need anything in a ping request?
}

message PingResponse {
    repeated Sensor sensor = 1;
}

message Sensor {
    string type = 1;
    bool status = 2;
    bytes data = 3;
}

sensors := make(map[string]Sensor) // keyed by sensor type
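To make the flow concrete, here's a tiny sketch of how the server side might fold a ping response into that map. These are plain Go structs mirroring the messages above, not the protoc-generated code, and the sensor types are made up:

package main

import "fmt"

// Plain Go mirrors of the proto messages above (not generated code).
type Sensor struct {
    Type   string
    Status bool
    Data   []byte
}

type PingResponse struct {
    Sensors []Sensor
}

func main() {
    // sensors keyed by type, as sketched above
    sensors := make(map[string]Sensor)

    resp := PingResponse{Sensors: []Sensor{
        {Type: "ph", Status: true, Data: []byte("7.01")},
        {Type: "do", Status: false},
    }}

    // fold the ping response into the reactor handler's view
    for _, s := range resp.Sensors {
        sensors[s.Type] = s
    }
    fmt.Println(len(sensors), "sensors tracked")
}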
6/23 TODO:

X- BBB mem fix
    - 32 gig for the main, but where to put the OS?
    - obv in eMMC, but how to init the SD card? (probably dev tree :( )
Y- Server side impl
    Y - Need a struct for the RC
    X - Should we store and load configs based on IDs? (efficiency of this vs performance increases, i.e. bandwidth vs storage)
    Y/X - Who should kill the RC and how do we know it's dead? (Garbage collection to the rescue, hopefully)

X- UPDATE PRES
    - Add bottlenecks for each part in that section
        - I2C: 128 addrs and ~90 bytes/s per device at 128 devs optimally
        - BB: Hardware is upgradeable, even customizable ;)
        - Server: It's overkill as is, can benchmark with a rudimentary Go overload once it's completed
    - Sensor configs
        - how to store sensor info efficiently and searchably lol
        - who needs to know what the sensor is? (Just the SM? Even the SM?)
X- TUI
    - pls this would be so sick

TODO: 6-24

Y - Pres stuff from yesterday + Python gRPC abstraction
Y - RPI flash
- Add resilience to the coordinator process (aka error handling, blech)
TODO 6/27

- Time to tackle sensor managers officially
    - to hell with port generation
    - going to use channels but not like a jackass
    - going to try generating channels interface-side but via implicit types to avoid the interface stuff
    - should set up a structure where I can use arbitrary types on the backend and fulfill methods to get/infer information on the frontend
- rewrite the I2C interface to employ the same method, should allow for this (sketched below):
    1) generate type
    2) send it to worker
    3) receive back (original? copy?)
    4) use interface methods to get required values
- should simplify all internal communication and potentially suggests an API for implementation
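A toy version of steps 1-4, assuming a deliberately small Reading interface; the type names and the worker body are placeholders:

package main

import "fmt"

// Reading is a deliberately tiny interface: the worker never needs to
// know the concrete sensor type, only how to get a value out of it.
type Reading interface {
    Value() float64
}

// phReading is one arbitrary backend type fulfilling the interface.
type phReading struct{ ph float64 }

func (p phReading) Value() float64 { return p.ph }

// worker echoes readings back after (hypothetically) processing them.
func worker(in <-chan Reading, out chan<- Reading) {
    for r := range in {
        out <- r // 3) receive back (here: the original)
    }
    close(out)
}

func main() {
    in := make(chan Reading)
    out := make(chan Reading)
    go worker(in, out)

    in <- phReading{ph: 7.2} // 1) generate type, 2) send it to the worker
    close(in)

    for r := range out {
        fmt.Println("value:", r.Value()) // 4) use interface methods
    }
}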
TODO 6/28

- It works... kind of
- I learned a lot about
    "the smaller the interface, the more useful it is"

-Y time to tackle the server-side error handling, aka reconnect
    - custom backoff? Max TO? Sleep period?
        5ms -> 10ms -> 25ms -> 50ms -> 100ms -> 250ms -> 500ms -> 1s -> 1s (sketched below)
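A minimal sketch of that reconnect schedule; the dial function is a stand-in for the real gRPC reconnect attempt:

package main

import (
    "errors"
    "fmt"
    "time"
)

// backoff is the schedule listed above: ramp up, then sit at 1s.
var backoff = []time.Duration{
    5 * time.Millisecond, 10 * time.Millisecond, 25 * time.Millisecond,
    50 * time.Millisecond, 100 * time.Millisecond, 250 * time.Millisecond,
    500 * time.Millisecond, time.Second,
}

// dial is a stand-in for the real reconnect attempt.
func dial(attempt int) error {
    if attempt < 5 {
        return errors.New("reactor unreachable")
    }
    return nil
}

func main() {
    for attempt := 0; ; attempt++ {
        if err := dial(attempt); err == nil {
            fmt.Println("reconnected after", attempt, "retries")
            return
        }
        wait := backoff[len(backoff)-1] // cap at 1s
        if attempt < len(backoff) {
            wait = backoff[attempt]
        }
        time.Sleep(wait)
    }
}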
-Y Actual logic? (rough sketch below)
    1) Create listener
    2) create reactor managers for each reactor
        a) whose job is it to kill on disconnect? Should we kill?
        b) the RM will have the actual ping mechanism, with the coordinator aggregating in an eventual TUI
    3) reactivated reactors should connect to the same RM to resume connections with less downtime. Memory use be damned (ALLOCATED?? ID VS IP)
    4) need a way to purge manually disconnected reactors
        a) also should check IDs, which are deterministic and map to actual hardware
    5) continue forever (for select??)
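A rough sketch of steps 2-4, assuming reactors are keyed by a deterministic uint32 id; the connection plumbing is left out and all names are illustrative:

package main

import "fmt"

// reactorManager is a stub for the RM described below.
type reactorManager struct {
    id    uint32
    alive bool
}

func (rm *reactorManager) start() { rm.alive = true }

type coordinator struct {
    managers map[uint32]*reactorManager
}

// connect reuses an existing RM for a returning reactor (step 3)
// or creates one for a new reactor (step 2).
func (c *coordinator) connect(id uint32) *reactorManager {
    if rm, ok := c.managers[id]; ok {
        rm.start()
        return rm
    }
    rm := &reactorManager{id: id}
    rm.start()
    c.managers[id] = rm
    return rm
}

// purge drops a manually disconnected reactor (step 4).
func (c *coordinator) purge(id uint32) { delete(c.managers, id) }

func main() {
    c := &coordinator{managers: make(map[uint32]*reactorManager)}
    c.connect(10292133) // new reactor
    c.connect(10292133) // same reactor reconnecting: same RM, less downtime
    fmt.Println("managers:", len(c.managers))
    c.purge(10292133)
}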
-Y RM Logic?
    1) on spawn, ping the reactor to get initial sensor status
    2) enter a ping loop to get status and keep the heartbeat alive
    3) no action on a sensor going down (unless method, but duh)
    4) on the reactor going down:
        1) save to config?
        2) "sleep mode"
            i.e. stop pinging and wasting resources doing stuff
        3) wait for the coordinator to reactivate it
    5) reactivation: call start and resume pinging (loop sketched below)
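One way that loop could look, with the ping faked and the heartbeat interval made up; the coordinator's reactivation is just a channel send here:

package main

import (
    "fmt"
    "time"
)

// pingLoop pings until the reactor goes down, then parks ("sleep mode")
// until the coordinator reactivates it via the wake channel.
func pingLoop(ping func() error, wake <-chan struct{}, stop <-chan struct{}) {
    for {
        if err := ping(); err != nil {
            fmt.Println("reactor down, entering sleep mode:", err)
            select {
            case <-wake: // reactivation: resume pinging
                fmt.Println("reactivated, resuming pings")
            case <-stop:
                return
            }
        }
        select {
        case <-time.After(100 * time.Millisecond): // heartbeat interval (made up)
        case <-stop:
            return
        }
    }
}

func main() {
    wake := make(chan struct{})
    stop := make(chan struct{})

    calls := 0
    go pingLoop(func() error {
        calls++ // only touched from the pingLoop goroutine
        if calls == 1 {
            return fmt.Errorf("no response") // simulate the reactor being down once
        }
        return nil
    }, wake, stop)

    time.Sleep(50 * time.Millisecond)
    wake <- struct{}{} // coordinator reactivates the RM
    time.Sleep(250 * time.Millisecond)
    close(stop)
}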
- RM Struct? (one possible shape sketched below)
    - needs to know
        - ip:port of the reactor
        - id of the reactor
        - mapping of sensors to status
        - last seen for the reactor (and sensor?)
    - needs to be able to
        - ping the reactor for status
        - get status
        - store and update status reliably
        - stop pinging on down detection
        - detect outages
        - respond to coordinator requests? (rpc or method, most likely?)
    - relies on
        - SM for sensor status
    - implements
        - start/restart mechanism for the coordinator
        - ping response for the coordinator
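One possible shape for that struct; the field names, address, and id are illustrative guesses, not the real definition:

package main

import (
    "sync"
    "time"
)

// ReactorManager is a sketch of the RM state listed above.
type ReactorManager struct {
    Addr     string // ip:port of the reactor
    ID       uint32 // deterministic reactor id
    LastSeen time.Time

    mu      sync.Mutex      // guards sensors for reliable updates
    sensors map[string]bool // sensor type -> up/down
}

// UpdateStatus stores the latest status for a sensor.
func (rm *ReactorManager) UpdateStatus(sensorType string, up bool) {
    rm.mu.Lock()
    defer rm.mu.Unlock()
    if rm.sensors == nil {
        rm.sensors = make(map[string]bool)
    }
    rm.sensors[sensorType] = up
    rm.LastSeen = time.Now()
}

// Status answers a coordinator request from the stored state.
func (rm *ReactorManager) Status() map[string]bool {
    rm.mu.Lock()
    defer rm.mu.Unlock()
    out := make(map[string]bool, len(rm.sensors))
    for k, v := range rm.sensors {
        out[k] = v
    }
    return out
}

func main() {
    rm := &ReactorManager{Addr: "192.168.1.50:2000", ID: 10292133}
    rm.UpdateStatus("ph", true)
    _ = rm.Status()
}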
- Coordinator Struct?
    - needs to know
        - mapping of ids of connected reactors to RMs
        - its own ip:port to serve the listener on
        - internal mapping of the system? (any efficiency benefits here vs mem usage?)
    - needs to be able to
        - set up new RMs on incoming connections
        - call existing RMs on incoming connections
        - ping for reactor status from RMs
        - store status for the TUI?
    - relies on
        - RM for reactor status
    - implements
        - application logic
That went surprisingly well... Same method for tui

process outline:

TUI
    1) display TUI outline and connect to coordinator
    2) let coordinator know where to send reactor/sensor changes
    3) enter loop of checking for changes and drawing (loop sketched below)
    4) on quit should gracefully exit

Coordinator
    1) on TUI connection start routine
    2) collect? sensor/reactor info and send init to TUI
    3) upon changes? send to TUI
    4) exit when TUI connection closes
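A bare-bones sketch of the TUI check-and-draw loop with graceful exit, with the coordinator side reduced to a channel of updates; the update type is invented:

package main

import "fmt"

// statusUpdate is a placeholder for whatever the coordinator pushes.
type statusUpdate struct {
    Reactor uint32
    Sensor  string
    Up      bool
}

// tuiLoop is step 3 of the TUI outline: check for changes and redraw,
// exiting cleanly when the quit channel fires or the feed closes.
func tuiLoop(updates <-chan statusUpdate, quit <-chan struct{}) {
    for {
        select {
        case u, ok := <-updates:
            if !ok {
                return // coordinator side closed: treat as disconnect
            }
            fmt.Printf("draw: reactor %d sensor %s up=%v\n", u.Reactor, u.Sensor, u.Up)
        case <-quit:
            return // 4) graceful exit
        }
    }
}

func main() {
    updates := make(chan statusUpdate, 1)
    quit := make(chan struct{})

    // coordinator side (steps 2/3): send init state, then changes
    updates <- statusUpdate{Reactor: 10292133, Sensor: "ph", Up: true}
    close(updates)

    tuiLoop(updates, quit)
    close(quit)
}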
- TUI struct
    - needs to know
        - all the tui stuff (duh)
        - reactor/sensor status
    - needs to be able to
        - create (several instances of) a TUI
        - receive and display up-to-date system info
        - delete reactors/sensors
        - be efficient
            - i know this is broad but bear with me
    - relies on
        - coordinator struct for system info
        - coordinator struct to fulfil delete requests
    - implements
        - tui
        - user interface or management

- Coordinator Struct
    - needs to know
        - all the sensor and reactor states
    - needs to be able to
        - know what the TUI knows/doesn't know
        - efficiently notify the tui of changes in the system
        - notify sensors or reactors of removal
    - relies on
        - rm/sm to implement reactor/sensor removal
        - rm for reactor/sensor status
    - implements
        - sender of system status for the TUI
TODO 6/29

- refactoring
    - creating a general listener, and then a coordinator for reactor/tui that uses the listener to field incoming requests
    - change update loops for status to only send new data, or empty messages for pings
- tui
    - creating coordinator/manager for TUI clients
    - create update loop for tui clients
    - grpc to tui client with updates and hb
    - drawfunc loops over change buffer
    - on disconnect should display static terminal with offline warning
- Listener Struct (sketch below)
    - needs to know
        - IP
        - Port
    - needs to be able to
        - respond to incoming gRPC
        - create corresponding manager // taken care of in the actual handler
    - relies on
        - grpc for the endpoint stuff
        - manager impl
    - implements
        - listener for main func
        - manager generator essentially
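A stripped-down sketch of that split: the listener only owns the endpoint and hands each connection to a handler, which is where manager creation happens. A plain TCP listener stands in for the gRPC server here, and the port is arbitrary:

package main

import (
    "fmt"
    "log"
    "net"
)

// Listener owns just the endpoint; each accepted connection is handed
// to a handler that takes care of creating the right manager.
type Listener struct {
    IP   string
    Port int
}

func (l *Listener) Serve(handle func(net.Conn)) error {
    lis, err := net.Listen("tcp", fmt.Sprintf("%s:%d", l.IP, l.Port))
    if err != nil {
        return err
    }
    defer lis.Close()
    for {
        conn, err := lis.Accept()
        if err != nil {
            return err
        }
        go handle(conn) // manager creation happens in the handler
    }
}

func main() {
    l := &Listener{IP: "0.0.0.0", Port: 2000}
    err := l.Serve(func(c net.Conn) {
        log.Println("client connected from", c.RemoteAddr())
        c.Close()
    })
    if err != nil {
        log.Fatal(err)
    }
}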
coordinator should be separate ***

- new coordinator struct
    - listener sends new connections to the coordinator, who appends to an internal registry and reacts
    - needs to know
        - incoming:
            - ip
            - port
            - client type?
    - needs to be able to
        - wait for incoming connections
        - create a new manager for the client
        - update internal directory
        - remove entries ?? (tui -> reactor)
    - implements
        - manager creation
        - connection handling
        - client hb
    - relies on
        - manager impl
        - listener call?

alright, complete redesign

server acts as the singular listener
routes all requests to a central coordinator
this calls the corresponding client coordinator, which then initiates managers etc
now redesigning sensor info

new fmt
    1) have a separate long-running coordinator routine responsible for a "changed" list of sensors
    2) on reactor status request:
        a) send the changed list
        b) if the send was acknowledged, purge the changed list
    * both must be atomic (sketched below)
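A small sketch of that changed-list idea, with one mutex making both the add and the send-then-purge step atomic; the send function is a placeholder for the real status reply. Holding the lock across the send is the simplest way to get the "both must be atomic" property, at the cost of blocking new adds while a send is in flight:

package main

import (
    "fmt"
    "sync"
)

// changeList is the RLC-side "changed" list: additions and the
// send-then-purge step are both done under one lock, so a change
// recorded mid-send is never lost.
type changeList struct {
    mu      sync.Mutex
    changed []string // sensors that changed since the last ack'd send
}

func (c *changeList) Add(sensor string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.changed = append(c.changed, sensor)
}

// Flush sends the current list via send(); the list is purged only if
// the send is acknowledged, and the whole step is atomic.
func (c *changeList) Flush(send func([]string) bool) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if len(c.changed) == 0 {
        return
    }
    if send(c.changed) { // acknowledged
        c.changed = nil
    }
}

func main() {
    cl := &changeList{}
    cl.Add("ph")
    cl.Flush(func(batch []string) bool {
        fmt.Println("sending", batch)
        return true // pretend the server ack'd
    })
}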
new rlc struct

- needs to know
    - sensors connected
    - changes in sensors
    - how to establish connection to central server
- needs to be able to
    - track changes in its system layout
    - atomically relay these to the cc
- depends on
    - I2C sensor info
- implements
    - reactor sensor change tracking
new general coord

What does it need to do?
- Needs to respond to incoming clients from the listener
    - what does this mean?
        - needs to link a client to a manager
        - this is the whole high-level idea
        - can everything else be abstracted away? Why not?
    - how to link? Channel? Shared memory?
        - a channel is probably the best choice here
- structure:
    managers
        [uint32] : chan<- bool
        10292133 : chan<- bool
- how to use?
    - when a client connects, we search to see if a channel exists
        - if it does, we just send "true" down the channel to signal that the client connected
        - if we don't, we create a channel and a manager and start the manager
            - we then send true down the newly created channel
    - Do we ever close the channel?
        - shouldn't; that would mean the program is done accepting connections (could be a useful behavior in the future)
    - do we ever send false?
        - no, not sure what that would signal
- coordinator is specifically only designed to create/notify managers of a client connection (sketch below)
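That maps pretty directly onto Go; a minimal sketch of the lookup-or-create-and-notify step (the manager body, buffering, and locking details are placeholders):

package main

import (
    "fmt"
    "sync"
    "time"
)

// manager is a stub: it just reacts to "client connected" signals.
func manager(id uint32, connected <-chan bool) {
    for range connected {
        fmt.Println("manager", id, "notified of a client connection")
    }
}

// coordinator only creates/notifies managers on client connections.
type coordinator struct {
    mu       sync.Mutex
    managers map[uint32]chan<- bool
}

func (c *coordinator) clientConnected(id uint32) {
    c.mu.Lock()
    defer c.mu.Unlock()
    ch, ok := c.managers[id]
    if !ok {
        // no channel yet: create the channel and the manager, then start it
        newCh := make(chan bool, 1)
        c.managers[id] = newCh
        go manager(id, newCh)
        ch = newCh
    }
    ch <- true // signal that the client connected; never closed, never false
}

func main() {
    c := &coordinator{managers: make(map[uint32]chan<- bool)}
    c.clientConnected(10292133) // first connection: create + notify
    c.clientConnected(10292133) // reconnect: just notify the existing manager
    time.Sleep(10 * time.Millisecond)
}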
formally, New Coordinator:

- needs to know
    - client type
- needs to be able to
    - either notify or create and notify a manager on client connection
    - handle concurrency
- relies on
    - listener to send incoming clients
    - manager to implement actual manager
- implements
    - manager activation and memory