feat: docs Sphinx pipeline + dashboard header link
docs/usage.rst — 181 lines (new file)
Usage
=====

Ingest a new acquisition
------------------------

1. Connect the GoPro SSD to z620 (or make sure the MP4 files are under ``/mnt/portablessd``).

2. Start the ingest from core:

   .. code-block:: bash

      ssh floppyrj45@192.168.0.82
      cd /home/floppyrj45/docker/cosma-qc
      python3 scripts/ingest.py --path /mnt/portablessd/AUV009/

3. Check the created jobs:

   .. code-block:: bash

      python3 -c "
      import sqlite3
      conn = sqlite3.connect('cosma-qc.db')
      for row in conn.execute('SELECT id, auv, gopro, segment, status FROM jobs ORDER BY id'):
          print(row)
      conn.close()
      "

4. The dispatcher automatically picks up jobs in ``pending`` status.
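As a rough mental model, one dispatcher cycle can be pictured as claiming the oldest ``pending`` row from the jobs table. This is only a sketch: the real dispatcher code is not shown in this page, and the reduced schema below is an assumption.

```python
import sqlite3

# Hypothetical sketch of one dispatcher cycle: claim the oldest pending job.
# An in-memory DB stands in for cosma-qc.db; the real table has more columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, auv TEXT, status TEXT, worker TEXT)")
conn.executemany(
    "INSERT INTO jobs (auv, status, worker) VALUES (?, ?, ?)",
    [("AUV009", "pending", None), ("AUV009", "done", "192.168.0.84")],
)

def claim_next_pending(conn, worker):
    """Pick the oldest pending job and assign it to a worker in one transaction."""
    with conn:  # commits on success, rolls back on error
        row = conn.execute(
            "SELECT id FROM jobs WHERE status='pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute(
            "UPDATE jobs SET status='running', worker=? WHERE id=?", (worker, row[0])
        )
        return row[0]

job_id = claim_next_pending(conn, "192.168.0.84")
print(job_id)  # → 1 (the only pending job)
```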
Monitoring jobs
---------------

Web dashboard
^^^^^^^^^^^^^

Open the dashboard: http://192.168.0.82:3849

It shows in real time:

- Status of each job (pending / running / done / failed)
- Assigned worker
- Frame-extraction progress
- Links to the PLY and GLB files
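When the dashboard is unreachable, the per-status summary it shows can be rebuilt directly from the SQLite database. A small sketch, with in-memory data standing in for ``cosma-qc.db`` (the real schema has more columns):

```python
import sqlite3
from collections import Counter

# Reproduce the dashboard's per-status counts straight from the jobs table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)",
                 [("done",), ("done",), ("running",), ("failed",)])

counts = Counter(status for (status,) in conn.execute("SELECT status FROM jobs"))
for status in ("pending", "running", "done", "failed"):
    print(f"{status:8} {counts.get(status, 0)}")
```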
Dispatcher logs
^^^^^^^^^^^^^^^

.. code-block:: bash

   # Follow logs in real time
   sudo journalctl -u cosma-qc-dispatcher -f

   # Last 100 lines
   sudo journalctl -u cosma-qc-dispatcher -n 100

   # Filter for errors
   sudo journalctl -u cosma-qc-dispatcher | grep -i error
Database
^^^^^^^^

.. code-block:: bash

   # Overall job state
   sqlite3 /home/floppyrj45/docker/cosma-qc/cosma-qc.db \
     "SELECT id, auv, status, worker, updated_at FROM jobs ORDER BY id;"

   # Running jobs
   sqlite3 cosma-qc.db \
     "SELECT id, auv, worker FROM jobs WHERE status='running';"

   # Failed jobs
   sqlite3 cosma-qc.db \
     "SELECT id, auv, status FROM jobs WHERE status='failed';"
Viewing a PLY point cloud
-------------------------

On the worker, the Viser viewer is started automatically during reconstruction. Open it in a browser (one port per job):

.. code-block:: bash

   http://192.168.0.84:8100   # for job 0
   http://192.168.0.84:8101   # for job 1
   # etc.
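The two examples suggest one Viser port per job, starting at 8100. A tiny helper built on that inference (the port scheme and the worker address are assumptions, not a documented guarantee):

```python
# Hypothetical: derive the Viser viewer URL from a job id, assuming the
# "base port + job id" pattern seen in the examples.
WORKER = "192.168.0.84"
BASE_PORT = 8100

def viser_url(job_id: int) -> str:
    """Return the assumed Viser viewer URL for a given job id."""
    return f"http://{WORKER}:{BASE_PORT + job_id}"

print(viser_url(0))  # → http://192.168.0.84:8100
print(viser_url(7))  # → http://192.168.0.84:8107
```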
With CloudCompare from a PC (once the PLY is downloaded):

.. code-block:: bash

   # Copy the PLY to the PC
   scp floppyrj45@192.168.0.84:/cosma-qc-frames/job_ID/reconstruction.ply ./

   # Then open it in CloudCompare
Downloading a GLB
-----------------

1. Generate the GLB through the API:

   .. code-block:: bash

      curl -X POST http://192.168.0.82:3849/jobs/ID/export_glb

2. Wait for generation to finish (it can take a few minutes depending on the size of the PLY).

3. Start an HTTP server on the relevant worker:

   .. code-block:: bash

      ssh floppyrj45@192.168.0.84 \
        "nohup python3 -m http.server 8300 --directory /cosma-qc-frames > /tmp/http8300.log 2>&1 &"

4. Download:

   .. code-block:: bash

      wget http://192.168.0.84:8300/job_ID/reconstruction.glb

5. Stop the HTTP server after the download:

   .. code-block:: bash

      ssh floppyrj45@192.168.0.84 "pkill -f 'http.server 8300'"
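When scripting these steps, the two URLs can be derived from the job id. A sketch that only builds the URLs from the endpoint and path patterns shown in this section (no network call is made; the worker address is the one from the examples and may differ per job):

```python
# Hypothetical URL helpers for the GLB export flow.
CORE = "http://192.168.0.82:3849"        # dashboard / API host
WORKER_HTTP = "http://192.168.0.84:8300"  # ad-hoc http.server on the worker

def export_url(job_id: int) -> str:
    """URL to POST to in order to trigger GLB generation."""
    return f"{CORE}/jobs/{job_id}/export_glb"

def glb_url(job_id: int) -> str:
    """URL to fetch the finished GLB from."""
    return f"{WORKER_HTTP}/job_{job_id}/reconstruction.glb"

print(export_url(3))  # → http://192.168.0.82:3849/jobs/3/export_glb
print(glb_url(3))     # → http://192.168.0.84:8300/job_3/reconstruction.glb
```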
Restarting a failed job
-----------------------

.. code-block:: bash

   # Put the job back to pending
   sqlite3 /home/floppyrj45/docker/cosma-qc/cosma-qc.db \
     "UPDATE jobs SET status='pending', worker=NULL WHERE id=ID;"

   # Delete the .done markers to force a full frame re-extraction
   ssh floppyrj45@192.168.0.84 \
     "rm -f /cosma-qc-frames/job_ID/.video_*.done"

   # Restart the dispatcher if needed
   sudo systemctl restart cosma-qc-dispatcher

The dispatcher picks the job up again on its next cycle.
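The SQL reset can also be done from Python, which makes it easy to refuse requeueing a job that is not actually failed. A sketch against an in-memory stand-in for ``cosma-qc.db`` (the ``AND status='failed'`` guard is an addition for safety, not part of the sqlite3 command in this section):

```python
import sqlite3

# In-memory stand-in for cosma-qc.db with one failed job.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT, worker TEXT)")
conn.execute("INSERT INTO jobs (status, worker) VALUES ('failed', '192.168.0.84')")

def requeue(conn, job_id):
    """Reset a failed job so the dispatcher picks it up on its next cycle."""
    with conn:
        cur = conn.execute(
            "UPDATE jobs SET status='pending', worker=NULL "
            "WHERE id=? AND status='failed'",
            (job_id,),
        )
        return cur.rowcount == 1  # False if the job was not in 'failed' state

print(requeue(conn, 1))  # → True
```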
Restarting the whole pipeline
-----------------------------

.. code-block:: bash

   # Stop
   sudo systemctl stop cosma-qc-dispatcher
   docker compose -f /home/floppyrj45/docker/cosma-qc/docker-compose.yml down

   # Start
   docker compose -f /home/floppyrj45/docker/cosma-qc/docker-compose.yml up -d
   sudo systemctl start cosma-qc-dispatcher
Quick post-mission checks
-------------------------

.. code-block:: bash

   # 1. Are all jobs done?
   sqlite3 cosma-qc.db "SELECT COUNT(*) FROM jobs WHERE status != 'done';"
   # Should return 0

   # 2. Are all PLY files present?
   for id in $(sqlite3 cosma-qc.db "SELECT id FROM jobs WHERE status='done'"); do
     worker=$(sqlite3 cosma-qc.db "SELECT worker FROM jobs WHERE id=$id")
     ssh floppyrj45@$worker "ls -lh /cosma-qc-frames/job_${id}/reconstruction.ply"
   done

   # 3. Enough disk space?
   ssh floppyrj45@192.168.0.84 "df -h /cosma-qc-frames"
   ssh floppyrj45@192.168.0.87 "df -h /cosma-qc-frames"
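Check #1 can be wrapped as a small pass/fail script. A sketch with in-memory data standing in for ``cosma-qc.db``:

```python
import sqlite3

# Count jobs that are not 'done'; the mission is clean when this is 0.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)", [("done",), ("done",)])

(not_done,) = conn.execute(
    "SELECT COUNT(*) FROM jobs WHERE status != 'done'"
).fetchone()
print("OK" if not_done == 0 else f"{not_done} job(s) not done")  # → OK
```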