|
|
|
time to plan
|
|
|
|
|
|
|
|
terms
|
|
|
|
|
|
|
|
RLC - reactor level coordinator (Beagleboard)
|
|
|
|
RH - Reactor Handler (goroutine)
|
|
|
|
SH - sensor handler (goroutine)
|
|
|
|
Reactor Side:
|
|
|
|
|
|
|
|
needs
|
|
|
|
- way to discover active sensors
|
|
|
|
- spin up goroutine for each sensor responsible for keeping status and logs
|
|
|
|
- way to read back and truncate logs for safe data delivery to the server
|
|
|
|
- routing requests from reactor level coordinator to relevant sensor
|
|
|
|
- internal memory sharing and channels for reactor level coordination
|
|
|
|
|
|
|
|
thoughts
|
|
|
|
- maybe the RLC can be responsible for packaging data for coordinator response
|
|
|
|
adv:
|
|
|
|
- clears up the network
|
|
|
|
- simplifies pinging
|
|
|
|
- keeps the data arbitrary
|
|
|
|
cons:
|
|
|
|
- unknown data size
|
|
|
|
- how to coordinate data structure
|
|
|
|
|
|
|
|
Server Side:
|
|
|
|
|
|
|
|
needs
|
|
|
|
- way to look into a config file for active reactors
|
|
|
|
- should then spin up a goroutine for each reactor
|
|
|
|
- responsible for recovery and consistent communication
|
|
|
|
- individual database entries
|
|
|
|
- API?
|
|
|
|
- use gRPC for comms between server and BB
|
|
|
|
- each reactor handler needs mechanism for pinging, recovery, and database correctness
|
|
|
|
-
|
|
|
|
|
|
|
|
message PingRequest {
|
|
|
|
// do we even need anything in a ping request?
|
|
|
|
}
|
|
|
|
|
|
|
|
message PingResponse {
|
|
|
|
repeated Sensor sensor = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
message Sensor {
|
|
|
|
string type = 1;
|
|
|
|
bool status = 2;
|
|
|
|
bytes data = 3;
|
|
|
|
}
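
a minimal server-side handler sketch for this exchange (names are hypothetical; assumes protoc-generated Go bindings in a package pb for a service exposing a Ping rpc):

package server

import (
	"context"

	pb "example.com/reactor/pb" // hypothetical generated package
)

type reactorServer struct {
	pb.UnimplementedReactorServer // actual name depends on the service definition
}

// Ping returns the current sensor list; values are hard-coded for illustration.
func (s *reactorServer) Ping(ctx context.Context, req *pb.PingRequest) (*pb.PingResponse, error) {
	sensors := []*pb.Sensor{
		{Type: "ph", Status: true, Data: []byte("ph:7.3")},
	}
	return &pb.PingResponse{Sensor: sensors}, nil
}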
|
|
|
|
|
|
|
|
sensors := [string]q
|
|
|
|
|
|
|
|
6/23 TODO:
|
|
|
|
|
|
|
|
X- BBB mem fix
|
|
|
|
- 32 GB for the main storage, but where to put the OS?
|
|
|
|
- obviously in the eMMC, but how to init the SD card? (probably device tree :( )
|
|
|
|
Y- Server side impl
|
|
|
|
Y - Need a struct for the RC
|
|
|
|
X - Should we store and load configs based on IDs? (efficiency of this vs performance increases i.e. bandwidth vs storage)
|
|
|
|
Y/X - Who should kill the RC and how do we know it's dead? (Garbage collection to the rescue hopefully)
|
|
|
|
|
|
|
|
X- UPDATE PRES
|
|
|
|
- Add bottlenecks for each part in that section
|
|
|
|
- I2C: 128 addrs and ~90 bytes/s per device at 128 devs optimally (presumably standard-mode I2C at 100 kbit/s ≈ 12.5 kB/s shared across all 128 addresses)
|
|
|
|
- BB: Hardware is upgradeable even customizable ;)
|
|
|
|
- Server: It's overkill as is; can benchmark with a rudimentary Go overload once it's completed
|
|
|
|
- Sensor configs
|
|
|
|
- how to store sensor info efficiently and searchably lol
|
|
|
|
- who needs to know what the sensor is? (Just the SM? Even the SM?)
|
|
|
|
X- TUI
|
|
|
|
- pls this would be so sick
|
|
|
|
TODO: 6-24
|
|
|
|
|
|
|
|
Y - Pres stuff from yesterday + python gRPC abstraction
|
|
|
|
Y - RPI flash
|
|
|
|
- Add resilience to the coordinator process (aka error handling, blech)
|
|
|
|
|
|
|
|
|
|
|
|
TODO 6/27
|
|
|
|
- Time to tackle sensor managers officially
|
|
|
|
- to hell with port generation
|
|
|
|
- going to use channels but not like a jackass
|
|
|
|
- going to try generating channels interface side but via implicit types to avoid the interface stuff
|
|
|
|
- should set up a structure where I can use arbitrary types on the backend and fulfill methods to get/infer information on the frontend
|
|
|
|
- rewrite I2C interface to employ same method, should allow for this
|
|
|
|
1) generate type
|
|
|
|
2) send it to worker
|
|
|
|
3) receive back (original? copy?)
|
|
|
|
4) use interface methods to get required values
|
|
|
|
- should simplify all internal communication and potentially suggests api for implementation
|
|
|
|
|
|
|
|
TODO 6/28
|
|
|
|
- It works... kind of
|
|
|
|
- I learned a lot about
|
|
|
|
"the smaller the interface, the more useful it is"
|
|
|
|
-Y time to tackle the server side error handling aka reconnect
|
|
|
|
- custom backoff? Max TO? Sleep Period?
|
|
|
|
5ms -> 10ms -> 25ms -> 50ms -> 100ms -> 250ms -> 500ms -> 1s -> 1s
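
a rough sketch of that schedule in Go (assumes "time" is imported; whether to keep retrying at 1s forever or give up after a max timeout is still open):

// backoff returns the wait before the nth consecutive retry, capped at 1s.
func backoff(attempt int) time.Duration {
	steps := []time.Duration{
		5 * time.Millisecond, 10 * time.Millisecond, 25 * time.Millisecond,
		50 * time.Millisecond, 100 * time.Millisecond, 250 * time.Millisecond,
		500 * time.Millisecond, time.Second,
	}
	if attempt >= len(steps) {
		return steps[len(steps)-1]
	}
	return steps[attempt]
}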
|
|
|
|
-Y Actual logic?
|
|
|
|
1) Create listener
|
|
|
|
2) create reactor managers for each reactor
|
|
|
|
a) whose job is it to kill on disconnect? Should we kill?
|
|
|
|
b) the RM will have the actual ping mechanism with coordinator aggregating in eventual TUI
|
|
|
|
3) reactivated reactors should connect to the same RM to resume connections with less downtime. Memory use be damned (ALLOCATED?? ID VS IP)
|
|
|
|
4) need a way to purge manually disconnected reactors
|
|
|
|
a) also should check ids which are deterministic and map actual hardware
|
|
|
|
5) continue forever (for select??)
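
the "continue forever" bit would just be the usual for/select, something like this (channel and type names are made up):

// run is the coordinator's main loop: field new connections until told to stop.
func (c *coordinator) run(incoming <-chan net.Conn, done <-chan struct{}) {
	for {
		select {
		case conn := <-incoming:
			go c.handleReactor(conn) // create or reuse the reactor manager
		case <-done:
			return // stop accepting connections
		}
	}
}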
|
|
|
|
-Y RM Logic?
|
|
|
|
1) on spawn ping reactor to get initial sensor status
|
|
|
|
2) enter ping loop to get status and keep heartbeat alive
|
|
|
|
3) no action on sensor going down (unless method but duh)
|
|
|
|
4) on reactor going down
|
|
|
|
1) save to config?
|
|
|
|
2) "sleep mode"
|
|
|
|
i.e. stop pinging and wasting resources doing stuff
|
|
|
|
3) wait for the coordinator to reactivate it
|
|
|
|
5) reactivation: call start and resume pinging
|
|
|
|
- RM Struct?
|
|
|
|
- needs to know
|
|
|
|
- ip:port of reactor
|
|
|
|
- id of reactor
|
|
|
|
- mapping of sensors to status
|
|
|
|
- last seen for reactor (and sensor?)
|
|
|
|
- needs to be able to
|
|
|
|
- ping reactor for status
|
|
|
|
- get status
|
|
|
|
- store and update status reliably
|
|
|
|
- stop pinging on down detection
|
|
|
|
- detect outages
|
|
|
|
- respond to coordinator requests? (rpc or method most likely?)
|
|
|
|
- relies on
|
|
|
|
- SM for sensor status
|
|
|
|
- implements
|
|
|
|
- start/restart mechanism for coordinator
|
|
|
|
- ping response for coordinator
|
|
|
|
- Coordinator Struct?
|
|
|
|
- needs to know
|
|
|
|
- mapping of ids of connected reactors to RM
|
|
|
|
- its own ip:port to serve listener on
|
|
|
|
- internal mapping of system? (any efficiency benefits here vs mem usage?)
|
|
|
|
- needs to be able to
|
|
|
|
- setup new RMs on incoming connections
|
|
|
|
- call existing RMs on incoming connections
|
|
|
|
- ping for reactor status from RMs
|
|
|
|
- store status for TUI?
|
|
|
|
- relies on
|
|
|
|
- RM for reactor status
|
|
|
|
- implements
|
|
|
|
- application logic
|
|
|
|
|
|
|
|
That went surprisingly well... Same method for tui
|
|
|
|
|
|
|
|
process outline:
|
|
|
|
TUI
|
|
|
|
1) display TUI outline and connect to coordinator
|
|
|
|
2) let coordinator know where to send reactor/sensor changes
|
|
|
|
3) enter loop of checking for changes and drawing
|
|
|
|
4) on quit should gracefully exit
|
|
|
|
|
|
|
|
Coordinator
|
|
|
|
1) on TUI connection start routine
|
|
|
|
2) collect? sensor/reactor info and send init to TUI
|
|
|
|
3) upon changes? send to TUI
|
|
|
|
4) exit when TUI connection closes
|
|
|
|
|
|
|
|
|
|
|
|
- TUI struct
|
|
|
|
- needs to know
|
|
|
|
- all the tui stuff (duh)
|
|
|
|
- reactor/sensor status
|
|
|
|
- needs to be able to
|
|
|
|
- create (several instances of) a TUI
|
|
|
|
- receive and display up to date system info
|
|
|
|
- delete reactors/sensors
|
|
|
|
- be efficient
|
|
|
|
- i know this is broad but bear with me
|
|
|
|
- relies on
|
|
|
|
- coordinator struct for system info
|
|
|
|
- coordinator struct to fulfil delete request
|
|
|
|
- implements
|
|
|
|
- tui
|
|
|
|
- user interface or management
|
|
|
|
- Coordinator Struct
|
|
|
|
- needs to know
|
|
|
|
- all the sensor and reactor states
|
|
|
|
- needs to be able to
|
|
|
|
- know what the TUI knows/doesn't know
|
|
|
|
- efficiently notify tui of change in system
|
|
|
|
- notify sensors or reactors of removal
|
|
|
|
- relies on
|
|
|
|
- rm/sm to implement reactor/sensor removal
|
|
|
|
- rm for reactor/sensor status
|
|
|
|
- implements
|
|
|
|
- sender of system status for TUI
|
|
|
|
|
|
|
|
TODO 6/29
|
|
|
|
- refactoring
|
|
|
|
- creating general listener and then a coordinator for reactor/tui that uses listener to field incoming requests
|
|
|
|
- change update loops for status to only send new data or empty messages for pings
|
|
|
|
- tui
|
|
|
|
- creating coordinator/manager for TUI clients
|
|
|
|
- create update loop for tui clients
|
|
|
|
- grpc to tui client with updates and hb
|
|
|
|
- drawfunc loops over change buffer
|
|
|
|
- on disconnect should display static terminal with offline warning
|
|
|
|
|
|
|
|
- Listener Struct
|
|
|
|
- needs to know
|
|
|
|
- IP
|
|
|
|
- Port
|
|
|
|
- needs to be able to
|
|
|
|
- respond to incoming gRPC
|
|
|
|
- create corresponding manager // taken care of in the actual handler
|
|
|
|
- relies on
|
|
|
|
- grpc for the endpoint stuff
|
|
|
|
- manager impl
|
|
|
|
- implements
|
|
|
|
- listener for main func
|
|
|
|
- manager generator essentially
|
|
|
|
|
|
|
|
coordinator should be separate ***
|
|
|
|
- new coordinator struct
|
|
|
|
- listener sends new connections to coordinator who appends to internal registry and reacts
|
|
|
|
- needs to know
|
|
|
|
- incoming:
|
|
|
|
- ip
|
|
|
|
- port
|
|
|
|
- client type?
|
|
|
|
- needs to be able to
|
|
|
|
- wait for incoming connections
|
|
|
|
- create a new manager for the client
|
|
|
|
- update internal directory
|
|
|
|
- remove entries ?? (tui -> reactor)
|
|
|
|
- implements
|
|
|
|
- manager creation
|
|
|
|
- connection handling
|
|
|
|
- client hb
|
|
|
|
- relies on
|
|
|
|
- manager impl
|
|
|
|
- listener call?
|
|
|
|
|
|
|
|
|
|
|
|
alright complete redesign
|
|
|
|
server acts as singular listener
|
|
|
|
routes all requests to a central coordinator
|
|
|
|
this calls the corresponding client coordinator, which then initiates managers etc.
|
|
|
|
|
|
|
|
now redesigning sensor info
|
|
|
|
|
|
|
|
new fmt
|
|
|
|
1) have a separate long-running coordinator routine responsible for a "changed" list of sensors
|
|
|
|
2) on reactor status request:
|
|
|
|
a) send the changed list
|
|
|
|
b) if the send was acknowledged purge the change list
|
|
|
|
* both must be atomic
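
a minimal sketch of that atomic changed-list (types and field names are made up; uses sync.Mutex):

// SensorStatus is a placeholder for whatever per-sensor state gets sent.
type SensorStatus string

// changeBuffer accumulates sensors whose state changed since the last acked send.
type changeBuffer struct {
	mu      sync.Mutex
	changed map[uint8]SensorStatus // keyed by i2c address
}

// snapshot returns a copy of the pending changes to put in the reply.
func (b *changeBuffer) snapshot() map[uint8]SensorStatus {
	b.mu.Lock()
	defer b.mu.Unlock()
	out := make(map[uint8]SensorStatus, len(b.changed))
	for k, v := range b.changed {
		out[k] = v
	}
	return out
}

// purge drops entries only after the send has been acknowledged.
func (b *changeBuffer) purge(addrs []uint8) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, a := range addrs {
		delete(b.changed, a)
	}
}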
|
|
|
|
|
|
|
|
new rlc struct
|
|
|
|
|
|
|
|
- needs to know
|
|
|
|
- sensors connected
|
|
|
|
- changes in sensors
|
|
|
|
- how to establish connection to central server
|
|
|
|
- needs to be able to
|
|
|
|
- track changes in its system layout
|
|
|
|
- atomically relay these to the cc
|
|
|
|
- depends on
|
|
|
|
- I2C sensor info
|
|
|
|
- implements
|
|
|
|
- reactor sensor change tracking
|
|
|
|
|
|
|
|
new general coord
|
|
|
|
|
|
|
|
What does it need to do?
|
|
|
|
- Needs to respond to incoming clients from listener
|
|
|
|
- what does this mean?
|
|
|
|
- needs to link a client to a manager
|
|
|
|
- this is the whole high level idea
|
|
|
|
- can everything else be abstracted away? Why not?
|
|
|
|
- how to link? Channel? Shared Memory?
|
|
|
|
- channel is probably the best choice here
|
|
|
|
- structure:
|
|
|
|
managers
|
|
|
|
[uint32] : chan<- bool
|
|
|
|
10292133 : chan<- bool
|
|
|
|
- how to use?
|
|
|
|
- when a client connects we search to see if a channel exists
|
|
|
|
- if it does we just send "true" down the channel to signal that the client connected
|
|
|
|
- if we don't, we create a channel and a manager and start the manager
|
|
|
|
- we then send true down the newly created channel
|
|
|
|
- Do we ever close the channel?
|
|
|
|
- shouldn't; that would mean the program is done accepting connections (could be a useful behavior in the future)
|
|
|
|
- do we ever send false?
|
|
|
|
- no, not sure what that would signal.
|
|
|
|
- coordinator is specifically only designed to create/notify managers of a client connection
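
a sketch of that create-or-notify lookup (assumes clients are keyed by a uint32 id and runManager is the hypothetical manager loop):

// Coordinator keeps one signalling channel per client id.
type Coordinator struct {
	mu       sync.Mutex
	managers map[uint32]chan<- bool
}

// clientConnected notifies the existing manager, or creates and starts one first.
func (c *Coordinator) clientConnected(id uint32) {
	c.mu.Lock()
	defer c.mu.Unlock()
	ch, ok := c.managers[id]
	if !ok {
		m := make(chan bool, 1)
		c.managers[id] = m
		go runManager(id, m)
		ch = m
	}
	ch <- true // signal that the client (re)connected
}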
|
|
|
|
|
|
|
|
formally, New Coordinator:
|
|
|
|
- needs to know
|
|
|
|
- client type
|
|
|
|
- needs to be able to
|
|
|
|
- either notify or create and notify a manager on client connection
|
|
|
|
- handle concurrency
|
|
|
|
- relies on
|
|
|
|
- listener to send incoming clients
|
|
|
|
- manager to implement actual manager
|
|
|
|
- implements
|
|
|
|
- manager activation and memory
|
|
|
|
|
|
|
|
TODO 6/30
|
|
|
|
creating efficient system mapping and data logging/structure info
|
|
|
|
|
|
|
|
idea # 1.5
|
|
|
|
use json maybe?
|
|
|
|
|
|
|
|
how?
|
|
|
|
- use json to build in the structure of our system via hierarchy
|
|
|
|
ex)
|
|
|
|
[
    {
        "reactor": uint32,
        "status": bool,
        "connected devices": [
            "device": {
                "addr": "0x10",
                "type": "ph sensor",
                "status": uint32,
                "data": "ph7.3" // json marshalling of the specific struct
            },
            "device": {
                "addr": "0x11",
                "type": "temp sensor",
                "status": uint32,
                "data": "t24.5C"
            }
        ]
    }
]
|
|
|
|
|
|
|
|
use go structs to map components and embed them
|
|
|
|
can send
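
one way to mirror that hierarchy with plain Go structs (field names are illustrative; json tags follow the sketch above):

// Device mirrors one "device" entry in the json sketch above.
type Device struct {
	Addr   string `json:"addr"`
	Type   string `json:"type"`
	Status uint32 `json:"status"`
	Data   string `json:"data"`
}

// Reactor groups the devices connected to one reactor.
type Reactor struct {
	ID      uint32   `json:"reactor"`
	Status  bool     `json:"status"`
	Devices []Device `json:"connected devices"`
}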
|
|
|
|
|
|
|
|
|
|
|
|
need to just spitball here
|
|
|
|
|
|
|
|
what am I trying to do at the end of the day?
|
|
|
|
I am taking sensor measurements
|
|
|
|
and potentially tweaking control parameters
|
|
|
|
let's treat each one separately at first
|
|
|
|
|
|
|
|
sensor measurements
|
|
|
|
|
|
|
|
each particular sensor manager will only be responsible for getting data from its sensor
|
|
|
|
what is the scope of responsibilities?
|
|
|
|
the sensor manager should log this data locally using a method? json?
|
|
|
|
|
|
|
|
how do we aggregate this info?
|
|
|
|
|
|
|
|
what if we structure our reactor as a mirror of our coordinator
|
|
|
|
|
|
|
|
rlc job would be to
|
|
|
|
- establish connection with central server
|
|
|
|
- wait for connections from devices
|
|
|
|
- create reactor managers for these devices
|
|
|
|
|
|
|
|
this could be really nice
|
|
|
|
|
|
|
|
rm (general) job:
|
|
|
|
- establish connection with device via I2C (client via wifi)
|
|
|
|
- shut down when device connection drops
|
|
|
|
- start when device connects again
|
|
|
|
|
|
|
|
adding data responsibilities
|
|
|
|
|
|
|
|
tuim:
|
|
|
|
needs to know of a struct of system
|
|
|
|
[reactorid][deviceaddress][device]
|
|
|
|
thus needs to know:
|
|
|
|
- reactor id
|
|
|
|
- connected device addresses
|
|
|
|
- device info: can just be a string!
|
|
|
|
- made up of status and relevant data
|
|
|
|
what do we rely on
|
|
|
|
- accurate device info string
|
|
|
|
- can have someone else make/maintain struct and proxy updates
|
|
|
|
tuic:
|
|
|
|
- needs to maintain an atomic struct of the system
|
|
|
|
as above
|
|
|
|
- still only needs to know
|
|
|
|
- reactor id
|
|
|
|
- connected device address mapped to device info [string]
|
|
|
|
relies on
|
|
|
|
- accurate status updates
|
|
|
|
- accurate device info
|
|
|
|
|
|
|
|
RC ** could have this guy be responsible for change parsing
|
|
|
|
- respond to updated status from RM and send to TUI
|
|
|
|
- basically a focus point
|
|
|
|
|
|
|
|
RM
|
|
|
|
- needs to call correct handlers for data coming in from rlc
|
|
|
|
- can we just use grpc handlers that get embedded in the manager at start?
|
|
|
|
- handlers can also notify when data doesn't match the previous entry
|
|
|
|
- this would prompt the data to be sent to the rc where it can be forwarded
|
|
|
|
|
|
|
|
RLC
|
|
|
|
- needs to have internal reactor state
|
|
|
|
-
|
|
|
|
|
|
|
|
|
|
|
|
this gives us a complete "skeleton" of service where we can connect/reconnect clients with appropriate managers
|
|
|
|
there isn't any functionality yet to actually log data
|
|
|
|
|
|
|
|
how do we leverage our current connections and add functionality to managers and coordinators?
|
|
|
|
|
|
|
|
methods and channels
|
|
|
|
|
|
|
|
each manager is responsible for pinging the associated device {reactor, device, tui}
|
|
|
|
either sending device info in tui case
|
|
|
|
or receiving it in the reactor/device case
|
|
|
|
|
|
|
|
this is why wrapping the gen structures is necessary. Two different operations
|
|
|
|
|
|
|
|
device manager:
|
|
|
|
could receive 0-100000000 values
|
|
|
|
could be any type
|
|
|
|
could be any number per entry
|
|
|
|
common struct?
|
|
|
|
"timestamp":"data"
|
|
|
|
data could be json struct
|
|
|
|
- makes it easy to parse at some point
|
|
|
|
- force sensor driver to write a go struct for the data
|
|
|
|
- Parse___Data(*json.Unmarshalled)
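
a sketch of that common struct, where each log record is a timestamp plus a sensor-specific payload the driver marshals itself (names hypothetical; uses encoding/json and time):

// Entry is the common log record: a timestamp plus sensor-specific json payload.
type Entry struct {
	Timestamp time.Time       `json:"timestamp"`
	Data      json.RawMessage `json:"data"` // whatever struct the sensor driver defines
}

// PHData is an example payload a ph sensor driver might marshal into Data.
type PHData struct {
	PH float64 `json:"ph"`
}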
|
|
|
|
|
|
|
|
|
|
|
|
complete i2c monitor redesign
|
|
|
|
|
|
|
|
i2c interface needs to do
|
|
|
|
data stuff:
|
|
|
|
- locking structure to serialize commands/reads
|
|
|
|
- removal function to manually purge dead devices
|
|
|
|
- reconnecting should reinit device manager and stuff
|
|
|
|
init stuff:
|
|
|
|
- keep track of devices seen and connected
|
|
|
|
- notify rlc of devices that connect/reconnect
|
|
|
|
build init stuff into a struct that can be embedded?
|
|
|
|
I2CCoordinator
|
|
|
|
- created when rlc is created
|
|
|
|
- tie rlc to i2ccoord via channels
|
|
|
|
- new devices channel for devices that go offline->online
|
|
|
|
- send the i2cdevice struct to embed in rm
|
|
|
|
- can call interface funcs on the embedded interface
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Eureka part 2?
|
|
|
|
we are writing all of the software libraries which means we should (with some basic cleansing) be able to just send direct database queries
|
|
|
|
this means some things which could be pros or cons
|
|
|
|
|
|
|
|
- each sensor realistically will have its own table for each reactor.
|
|
|
|
- we can group entries by reactor and look across time stamps (tidy?)
|
|
|
|
- we can log sql entries as text based backups
|
|
|
|
- we can use basic string struct with time stamps
|
|
|
|
- each sensor library will use a common struct and probably just use string fmting
|
|
|
|
- there are some efficiency benefits if we used custom gRPC calls for each db entry
|
|
|
|
- but we can just leverage a boilerplate call with some extra overhead?
|
|
|
|
- we still need a way of representing state of components
|
|
|
|
- reactor is easy and could be kept server side
|
|
|
|
- sensor needs to be transmitted across rlc
|
|
|
|
- should default to down if the reactor goes offline (unknown?)
|
|
|
|
|
|
|
|
direct query approach
|
|
|
|
pros
|
|
|
|
- easy to implement
|
|
|
|
- easy to use interfaces for common libs (compiling efficiency)
|
|
|
|
- easy to add sensors (use common libs and just make custom string in wrapper)
|
|
|
|
- can develop logging and db parts as manager funcs
|
|
|
|
cons
|
|
|
|
- need unique daemon to parse data on server for state struct
|
|
|
|
- trusting each sensor to maintain itself
|
|
|
|
- still need a way of translating state
|
|
|
|
|
|
|
|
state problem
|
|
|
|
|
|
|
|
it should just be an enumeration
|
|
|
|
it's efficient (could be as little as 4 bits but probably 1 byte) as opposed to a string (len(s) bytes, e.g. "ONLINE" = 6)
|
|
|
|
- is this all worth ~1-10? bytes of savings per dev?
|
|
|
|
- 100 reactors @ ~45 sensors = 46*100 = ~4.5 kb of state or ~ 36kb if we use strings
|
|
|
|
- so maybe?
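
the enum itself is just an iota block, e.g.:

// DeviceStatus is a one-byte state enum instead of strings like "ONLINE".
type DeviceStatus uint8

const (
	StatusUnknown DeviceStatus = iota
	StatusOffline
	StatusOnline
)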
|
|
|
|
more important than memory are network calls
|
|
|
|
need to update on tui:
|
|
|
|
- state changes (hopefully less frequent)
|
|
|
|
- current value (~5 seconds - ~30 minutes)
|
|
|
|
store both client and server side
|
|
|
|
- only store actively viewed data client side to prevent leaks
|
|
|
|
- store full struct but only serve as request response to prevent extra copies
|
|
|
|
|
|
|
|
system struct
|
|
|
|
- mapping of reactor ids to "Reactor" structs
|
|
|
|
- reactor is mapping of addr to sensor structs
|
|
|
|
- sensor struct is basic info
|
|
|
|
- device type (enum vs string)
|
|
|
|
- device status (enum vs string)
|
|
|
|
- device most recent value (int? vs string?)
|
|
|
|
- if offline last seen time
|
|
|
|
|
|
|
|
notes on struct
|
|
|
|
- should ideally have locks at reactor and sensor level
|
|
|
|
- use func to return sensor list via lock on reactor
|
|
|
|
- use func to update sensor list via lock on reactor
|
|
|
|
- use returned list to parse and request value from each sensor
|
|
|
|
- use goroutines and channels for efficient operation
|
|
|
|
- build response via returned structs
|
|
|
|
- respond to client
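
a rough shape for that struct, with a lock at the reactor level and funcs to copy out / update the sensor list (all names are placeholders; uses sync and time):

// SensorView is the per-device info kept server side.
type SensorView struct {
	Type     string
	Status   string // or an enum as discussed above
	Value    string
	LastSeen time.Time
}

// ReactorView holds the latest known state for one reactor.
type ReactorView struct {
	mu      sync.RWMutex
	Sensors map[uint8]SensorView // keyed by device address
}

// SensorList returns a copy of the sensor map under the read lock.
func (r *ReactorView) SensorList() map[uint8]SensorView {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make(map[uint8]SensorView, len(r.Sensors))
	for k, v := range r.Sensors {
		out[k] = v
	}
	return out
}

// Update overwrites one sensor entry under the write lock.
func (r *ReactorView) Update(addr uint8, s SensorView) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.Sensors[addr] = s
}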
|
|
|
|
|
|
|
|
note on tui manager
|
|
|
|
- ideally should keep simplified current client struct to spawn copies of the update daemons for each sensor
|
|
|
|
- each daemon should be EXTREMELY light weight and just feed new data values to the manager
|
|
|
|
- tuimanager will be responsible for efficiently buffering for tui client requests
|
|
|
|
- tui pings should be frequent and response should be any data to update
|
|
|
|
- client side we should be able to essentially overwrite any entries on our response
|
|
|
|
- simplifies interface
|
|
|
|
|
|
|
|
data aggregation outline
|
|
|
|
Starting from sensor
|
|
|
|
1) specific sensor manager initiates a read of the embedded i2c dev
|
|
|
|
2) on success read gets logged with the time to the internal txt log (json)
|
|
|
|
RLC loop:
|
|
|
|
3) rlc has long running method with sub routines reading each log and adding pending entries to the buffer
|
|
|
|
- buffer is bounded and routines block when it fills (use to limit rpc reply length)
|
|
|
|
4) on ping buffer is parsed into rpc reply
|
|
|
|
- send buffered log ids to cleanup routine but don't delete from log yet
|
|
|
|
5) next req has transaction ids of previous data that have been acked
|
|
|
|
6) send ids to cleanup process
|
|
|
|
7) respond with new buffer repeat
|
|
|
|
RM side:
|
|
|
|
received data from rlc
|
|
|
|
1) send reply to data parsing goroutine
|
|
|
|
parser loop:
|
|
|
|
1) start a completion listener
|
|
|
|
2) read each device in reply
|
|
|
|
3) start goroutine of db daemon for each dev with completion chan
|
|
|
|
4) once reply is empty can end
|
|
|
|
db daemon loop:
|
|
|
|
1) loop over device data entries
|
|
|
|
2) initiate db connection
|
|
|
|
3) parse through each data entry and send to db
|
|
|
|
4) if it was successful, send the transaction id to the completion channel (see the sketch after this outline)
|
|
|
|
monitoring rpc loop:
|
|
|
|
1) listen for completed transaction entries
|
|
|
|
2) append entries to ack
|
|
|
|
3) send to rm on ping timer
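
a sketch of the db daemon handoff described in the outline (types are placeholders; the actual db call is passed in so nothing is assumed about the db client):

// LogEntry / DeviceData are placeholders for what arrives in the rpc reply.
type LogEntry struct {
	TransactionID uint32
	Timestamp     time.Time
	Data          string
}

type DeviceData struct {
	Addr    uint8
	Entries []LogEntry
}

// dbDaemon writes one device's pending entries and sends acked transaction ids
// to the completion channel; failed entries stay pending for the next ping.
func dbDaemon(dev DeviceData, insert func(uint8, LogEntry) error, completed chan<- uint32) {
	for _, e := range dev.Entries {
		if err := insert(dev.Addr, e); err != nil {
			continue
		}
		completed <- e.TransactionID
	}
}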
|
|
|
|
|
|
|
|
Data is now in database for all intents and purposes
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
process ensures that the data is collected
|
|
|
|
now the data is on the server
|
|
|
|
6) server sends grpc reply results to a parsing goroutine
|
|
|
|
7) the parser loops over reply and spawns db daemons to enter info
|
|
|
|
|
|
|
|
|
|
|
|
should we even aggregate data? why not just write a db client as part of the rlc and let the sensor managers do the logging themselves?
|
|
|
|
|
|
|
|
need to focus:
|
|
|
|
|
|
|
|
2 major things going on
|
|
|
|
|
|
|
|
rlc can do data stuff on the reactor itself and just use the db client
|
|
|
|
- relies on exposed db endpoint but can just handle auth stuff
|
|
|
|
- can log locally
|
|
|
|
|
|
|
|
rlc also responds to status requests
|
|
|
|
- queries sensors for status
|
|
|
|
- adds to and sends reply
|
|
|
|
- receives these pings <= 5 seconds apart
|
|
|
|
- should have down detection to kill db actions
|
|
|
|
- optionally include a "data" string of the most recent reading
|
|
|
|
|
|
|
|
going to focus on status
|
|
|
|
want system to
|
|
|
|
init reactors
|
|
|
|
poll for status
|
|
|
|
respond with sensor info
|
|
|
|
view and manage on tui
|
|
|
|
|
|
|
|
how?
|
|
|
|
|
|
|
|
all structs only in status context
|
|
|
|
|
|
|
|
rlc struct
|
|
|
|
- knows
|
|
|
|
- connected devs and managers
|
|
|
|
- is able to
|
|
|
|
- poll managers for state info
|
|
|
|
- relies on
|
|
|
|
- managers for accurate and fast data
|
|
|
|
- implements data aggregation for rm
|
|
|
|
|
|
|
|
dm struct
|
|
|
|
- knows
|
|
|
|
- underlying i2c dev interface
|
|
|
|
- basic device info
|
|
|
|
- is able to
|
|
|
|
- respond to rlc status requests
|
|
|
|
- relies on
|
|
|
|
- rlc to ask for status
|
|
|
|
- implements
|
|
|
|
- status response
|
|
|
|
|
|
|
|
|
|
|
|
alright holy shit
|
|
|
|
I have rewritten the same five functions so many times
|
|
|
|
|
|
|
|
time to take inventory
|
|
|
|
|
|
|
|
right now the system has
|
|
|
|
a central coordinator
|
|
|
|
that can spawn managers
|
|
|
|
that can manage clients
|
|
|
|
and a reactor coordinator
|
|
|
|
that can spawn device managers
|
|
|
|
that can manage devices
|
|
|
|
|
|
|
|
I have a rudimentary ping system that queries the reactors for their status
|
|
|
|
|
|
|
|
where to go next
|
|
|
|
|
|
|
|
I want to build in control before I worry about actual data
|
|
|
|
this means tui and removal of managers
|
|
|
|
|
|
|
|
how to start?
|
|
|
|
|
|
|
|
need to create a central struct that serves as a representation of the system
|
|
|
|
|
|
|
|
map[reactorid] -> [dev addr] device info
|
|
|
|
reactor is online by nature of responding to a ping
|
|
|
|
device info is what we query for
|
|
|
|
|
|
|
|
tui manager will request data from the server struct
|
|
|
|
|
|
|
|
server struct should bridge between reactor coordinator and terminal coordinator
|
|
|
|
needs to be concurrent
|
|
|
|
needs to synchronize incoming sensor data
|
|
|
|
|
|
|
|
instead of a big stupid struct
|
|
|
|
- just have rm maintain struct for each reactor
|
|
|
|
- connect tui requests to rm
|
|
|
|
|
|
|
|
pros
|
|
|
|
- prevent redundancies in data
|
|
|
|
- limit operations after the ping
|
|
|
|
- serve copies?
|
|
|
|
|
|
|
|
what will this look like
|
|
|
|
|
|
|
|
TODO 7/5
|
|
|
|
time to flesh out the tui and move into actual sensor/db libraries
|
|
|
|
|
|
|
|
tuitime:
|
|
|
|
tui client (user side)
|
|
|
|
*will most likely be divided
|
|
|
|
needs to know:
|
|
|
|
- ip address of host
|
|
|
|
- eventually
|
|
|
|
- username/pass auth
|
|
|
|
needs to be able to
|
|
|
|
- request system info via server
|
|
|
|
- establish connection to central server
|
|
|
|
implements
|
|
|
|
- basic system management functionality
|
|
|
|
relies on
|
|
|
|
|
|
|
|
- server for up to date reactor and sensor data
|
|
|
|
|
|
|
|
|
|
|
|
TUI TIME
|
|
|
|
coordinator/listener/reactor seem to be stable
|
|
|
|
- will have to add exiting for tui manager
|
|
|
|
|
|
|
|
need to create the actual TUI at this point
|
|
|
|
separate into two logical aspects
|
|
|
|
- The client that gets the system info from the server
|
|
|
|
- the tui which actually displays this info into the tui
|
|
|
|
|
|
|
|
how to share the data between the client and tui?
|
|
|
|
- structs
|
|
|
|
- pros
|
|
|
|
- very efficient
|
|
|
|
- cons
|
|
|
|
- risky
|
|
|
|
- chan
|
|
|
|
- pros
|
|
|
|
- synchronize stuff
|
|
|
|
- cons
|
|
|
|
- hard to set up and finnicky
|
|
|
|
- methods
|
|
|
|
- pros
|
|
|
|
- synchronized
|
|
|
|
- easy to implement
|
|
|
|
- cons
|
|
|
|
- need to store/embed struct
|
|
|
|
|
|
|
|
systemviewer.go
|
|
|
|
TODO 7/7
|
|
|
|
time to make the synchronous system view
|
|
|
|
have a few ideas for the approach
|
|
|
|
a) make RM responsible for maintaining their own "branch" and store pointers to their branch
|
|
|
|
b) store the branch centrally and have the RM post changes
|
|
|
|
|
|
|
|
I like the idea of a central system viewer and you post updates to worker clients
|
|
|
|
a) posting updates vs calling methods
|
|
|
|
blocking vs non blocking
|
|
|
|
b)
|
|
|
|
|
|
|
|
let's lay out expectations
|
|
|
|
RM should keep a current view of itself and whether it is online
|
|
|
|
- this should persist despite reactor outage
|
|
|
|
- in case of outage every sensor is UNKNOWN
|
|
|
|
- optional last seen time for sensors/reactor
|
|
|
|
- exit should save to memory? persist for a given id?
|
|
|
|
- have a removal mechanism
|
|
|
|
- use case is to purge dead mechanism aka no black list
|
|
|
|
- each rm keeps most recent sensor view or reactor view in mem and can accept incoming system viewer connections
|
|
|
|
system viewer clients
|
|
|
|
- spawn 1 per tui client
|
|
|
|
- can do 2 things to any rm
|
|
|
|
a) call it for a complete copy which needs to be fast (gRPC?)
|
|
|
|
b) latch onto update chan to build its own copy for when the client requests the new devices
|
|
|
|
- follow a buffer -> client -> delete life cycle
|
|
|
|
system viewer "server"
|
|
|
|
a) spawn new system viewer clients and direct them to the proper reactors
|
|
|
|
-aside: can we use this to replace coordinator system? just make a system directory
|
|
|
|
|
|
|
|
what are we already storing?
|
|
|
|
in coordinator we have a mapping of ids to managers for reactor and
|
|
|
|
what if we remap system viewer and coordinator to a system coordinator which does both
|
|
|
|
seems redundant to keep multiple system copies
|
|
|
|
- any speed concerns? Client connections spawning new managers?
|
|
|
|
- we must lock map
|
|
|
|
- channels is probably the answer here, just desync the whole process from itself
|
|
|
|
- listener gets incoming clients
|
|
|
|
- clients get parsed into a "Client" object and sent to the system coordinator
|
|
|
|
- types have slight differences but essentially:
|
|
|
|
1) see if there is a stored manager or if we need to create one
|
|
|
|
1.5) create manager if it doesn't exist
|
|
|
|
2) start the manager with the client details
|
|
|
|
3) create 2 chans (<-reactor tui<-) for reactor & device info
|
|
|
|
now the divergence
|
|
|
|
Reactor Manager:
|
|
|
|
1) Connect to reactor and send initial ping
|
|
|
|
- if reactor ever doesn't respond (maybe backoff for unavailable) just kill manager and send offline to reactor status chan
|
|
|
|
2) As device info comes in start maintaining a system struct - this must persist exits and starts
|
|
|
|
3) For the sensor info coming in, send a copy on the chan to the void for all you're concerned
|
|
|
|
4) Respond to requests for entire system copies as clients initially connect
|
|
|
|
- probably just a method
|
|
|
|
5) only need to send reactor status on changes aka starts and exits
|
|
|
|
TUI Manager:
|
|
|
|
1) Buffer the currently known reactor status via chan hook
|
|
|
|
2) Wait (Timeout) for connection for tui client
|
|
|
|
- probably sub 5 seconds before we exit
|
|
|
|
- need to keep track via a last seen
|
|
|
|
3) respond with the buffered reactor info
|
|
|
|
4) on request for specific info
|
|
|
|
- request system viewer for the reactor info which will return the reactors sensor chan
|
|
|
|
- spawn goroutine to hook onto this chan and maintain a "local" copy of the new data for the client
|
|
|
|
- can probably make this more efficient but f it
|
|
|
|
- biggest buffer gets is # devs * size dev struct (bytes)
|
|
|
|
- drop anything but most recent
|
|
|
|
5) as client requests for info we either reply with the buffer from the hook or create a new buffer
|
|
|
|
6) translates into pages client side which are more efficient
|
|
|
|
7) could even look at batching these eventually
|
|
|
|
8) should probably kill the listeners (at least the chan) when the tui client
|
|
|
|
a) disconnects
|
|
|
|
b) goes idle
|
|
|
|
|
|
|
|
System Coordinator must then
|
|
|
|
1) have a method to handle client connections that is concurrent safe
|
|
|
|
2) start client managers on connection
|
|
|
|
3) keep a directory of all the channels for clients for device and reactor info
|
|
|
|
4) link tui client managers to rm properly
|
|
|
|
no need for a name change coordinator will have
|
|
|
|
system viewing functions in systemview.go
|
|
|
|
|
|
|
|
alright check in time
|
|
|
|
|
|
|
|
now have system viewer
|
|
|
|
which embeds a straight up info stream
|
|
|
|
and a map from reactor ids -> Info streams
|
|
|
|
|
|
|
|
InfoStreams are structs with methods for adding listeners and senders
|
|
|
|
both return monitor objects which you can either
|
|
|
|
Send(device info)
|
|
|
|
or GetBuffer() and ClearBuffer([]da)
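
based on the description above, the monitor objects could satisfy a pair of small interfaces along these lines (a sketch, not the actual code):

// DeviceInfo is whatever struct actually flows on the stream.
type DeviceInfo struct {
	ID     uint32
	Type   string
	Status string
	Data   string
}

// Sender is what a reactor manager gets back from the info stream.
type Sender interface {
	Send(DeviceInfo)
}

// Listener is what a tui manager gets back; it buffers updates until asked.
type Listener interface {
	GetBuffer() []DeviceInfo
	ClearBuffer([]DeviceInfo)
}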
|
|
|
|
|
|
|
|
this is all the foundation
|
|
|
|
just need to wrap into a thing the coordinator can use, or maybe even replace the coordinator
|
|
|
|
|
|
|
|
systemviewer has 4 methods
|
|
|
|
every tui manager will embed a reactor listener
|
|
|
|
every reactor manager will embed a reactor sender
|
|
|
|
when a tui client selects a reactor we will embed the device listener
|
|
|
|
every reactor will be its own device sender
|
|
|
|
|
|
|
|
the only thing that happens after init is tui may add device listeners
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
should unify so that it's only 1 ping request or a special request when we go to a page for the first time
|
|
|
|
|
|
|
|
ex devinfo
|
|
|
|
|
|
|
|
{
    Id uint32      // either id or addr
    Type string    // ['reactor', 'device'] set by rlc
    Status string  // set by sm
    Data string    // optional
    Index          // added by monitor for consistent location in tui
    TransactionID  // added by server and discarded in reply
}
|
|
|
|
|
|
|
|
I may just scrap this shit in favor of a synced view
|
|
|
|
overhead is probably minimal anyway
|
|
|
|
redo listener bullshit to just route to the cs
|
|
|
|
tui clients will just get a fresh copy of the reactor info and device info for every request
|
|
|
|
|