From e7cac2b6f20dff2f31f30686728c9bf5d1875e49 Mon Sep 17 00:00:00 2001 From: Denisa Date: Mon, 16 Sep 2024 14:22:04 +0200 Subject: [PATCH 1/3] First fixes Signed-off-by: Denisa --- amlip_docs/rst/getting_started/project_overview.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/amlip_docs/rst/getting_started/project_overview.rst b/amlip_docs/rst/getting_started/project_overview.rst index 41931d36..8e94ab52 100644 --- a/amlip_docs/rst/getting_started/project_overview.rst +++ b/amlip_docs/rst/getting_started/project_overview.rst @@ -8,7 +8,7 @@ Project Overview |eamlip| is a communications framework in charge of data exchange between :term:`AML` nodes through local or remote networks. -It is designed to allow non-experts users to create and manage a cluster of AML nodes to exploit the distributed and concurrent learning capabilities of AML. +It is designed to allow non-expert users to create and manage a cluster of AML nodes to exploit the distributed and concurrent learning capabilities of AML. Thus, AML-IP is a communication framework that abstracts the transport protocols from the user, creating a platform that communicates each node without requiring the user to be concerned about communication issues. It also allows the creation of complex distributed networks with one or multiple users working on the same problem. 
From 55efd0614c401dc3a9ea632e25f334e02ae63718 Mon Sep 17 00:00:00 2001 From: Denisa Date: Tue, 17 Sep 2024 08:51:16 +0200 Subject: [PATCH 2/3] Fixes documentation Signed-off-by: Denisa --- amlip_docs/index.rst | 25 ++++---- .../rst/demo/collaborative_learning.rst | 7 +- amlip_docs/rst/demo/tensor_inference.rst | 64 +++++++++---------- amlip_docs/rst/demo/workload_distribution.rst | 6 +- .../installation/sources/windows/windows.rst | 2 +- .../rst/getting_started/project_overview.rst | 2 +- amlip_docs/rst/user_manual/nodes/agent.rst | 12 ++-- .../rst/user_manual/nodes/computing.rst | 9 +-- amlip_docs/rst/user_manual/nodes/edge.rst | 13 ++-- .../rst/user_manual/nodes/inference.rst | 15 ++--- amlip_docs/rst/user_manual/nodes/main.rst | 8 +-- .../nodes/model_manager_receiver.rst | 6 +- .../nodes/model_manager_sender.rst | 3 +- amlip_docs/rst/user_manual/nodes/nodes.rst | 2 +- amlip_docs/rst/user_manual/nodes/status.rst | 20 +++--- .../scenarios/collaborative_learning.rst | 4 +- .../user_manual/scenarios/monitor_state.rst | 2 +- .../scenarios/workload_distribution.rst | 2 +- 18 files changed, 101 insertions(+), 101 deletions(-) diff --git a/amlip_docs/index.rst b/amlip_docs/index.rst index e82f17c7..b283417f 100644 --- a/amlip_docs/index.rst +++ b/amlip_docs/index.rst @@ -36,33 +36,30 @@ /rst/getting_started/project_overview - -.. _index_demo: +.. _index_user_manual: .. toctree:: - :caption: Demo Examples + :caption: User Manual :maxdepth: 2 :numbered: 5 :hidden: - /rst/demo/collaborative_learning - /rst/demo/tensor_inference - /rst/demo/rosbot2r_inference - /rst/demo/workload_distribution - + Nodes + Tools + Scenarios -.. _index_user_manual: +.. _index_demo: .. toctree:: - :caption: User Manual + :caption: Demo Examples :maxdepth: 2 :numbered: 5 :hidden: - Scenarios - Nodes - Tools - + /rst/demo/collaborative_learning + /rst/demo/tensor_inference + /rst/demo/rosbot2r_inference + /rst/demo/workload_distribution .. 
_index_developer_manual: diff --git a/amlip_docs/rst/demo/collaborative_learning.rst b/amlip_docs/rst/demo/collaborative_learning.rst index 24c04428..6422f7fe 100644 --- a/amlip_docs/rst/demo/collaborative_learning.rst +++ b/amlip_docs/rst/demo/collaborative_learning.rst @@ -82,7 +82,7 @@ Let's continue explaining the global variables. :language: python :lines: 24 -``waiter`` is a ``WaitHandler`` that waits on a boolean value. +``waiter`` is a ``WaitHandler``, an object that allows multiple threads to wait until another thread wakes them. In this case, due to being a ``BoolWaitHandler``, it waits on a boolean value. Whenever this value is ``True``, threads awake. Whenever it is ``False``, threads wait. @@ -125,8 +125,7 @@ Model Manager Sender Node ------------------------- This is the Python code for the :ref:`user_manual_nodes_model_sender` application. -It does not use real *AML Models*, but strings. -It does not have a real *AML Engine* but instead the calculation is an *upper-case* conversion of the string received. +It does not use real *AML Models* nor does it have a real *AML Engine*. Instead, strings are sent and the calculation is an *upper-case* conversion of the string received. It is implemented in |python| using :code:`amlip_py` API. This code can be found `here `__. @@ -145,7 +144,7 @@ Let's continue explaining the global variables. :language: python :lines: 23 -``waiter`` is a ``WaitHandler`` that waits on a boolean value. +``waiter`` is a ``WaitHandler``, an object that allows multiple threads to wait until another thread wakes them. In this case, due to being a ``BoolWaitHandler``, it waits on a boolean value. Whenever this value is ``True``, threads awake. Whenever it is ``False``, threads wait. 
diff --git a/amlip_docs/rst/demo/tensor_inference.rst b/amlip_docs/rst/demo/tensor_inference.rst index 66714820..652c36ea 100644 --- a/amlip_docs/rst/demo/tensor_inference.rst +++ b/amlip_docs/rst/demo/tensor_inference.rst @@ -14,9 +14,9 @@ TensorFlow Inference Background ========== -Inference refers to the process of using a trained model to make predictions or draw conclusions based on input data. -It involves applying the learned knowledge and statistical relationships encoded in the model to new, unseen data. -The inference of an image involves passing the image through a trained AI model to obtain a classification based on the learned knowledge and patterns within the model. +Inference is the process of using a trained model to make predictions or draw conclusions from new, unseen data. +It involves applying the learned knowledge and statistical relationships encoded in the model to the input data. +When inferring an image, the image is passed through a trained AI model to classify it based on the patterns and knowledge the model has learned. This demo shows how to implement 2 types of nodes, :ref:`user_manual_nodes_inference` and :ref:`user_manual_nodes_edge`, to perform TensorFlow inference on a given image. With these 2 nodes implemented, the user can deploy as many nodes of each kind as desired and check the behavior of a simulated |amlip| network running. @@ -44,22 +44,16 @@ The demo requires the following tools to be installed in the system: sudo apt install -y swig alsa-utils libopencv-dev pip3 install -U pyttsx3 opencv-python - curl https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o Miniconda3-latest-Linux-x86_64.sh - bash Miniconda3-latest-Linux-x86_64.sh - # For changes to take effect, close and re-open your current shell. 
- conda create --name tf python=3.9 - conda install -c conda-forge cudatoolkit=11.8.0 - mkdir -p $CONDA_PREFIX/etc/conda/activate.d - echo 'CUDNN_PATH=$(dirname $(python3 -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh - echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh - source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh + python -m venv aml-ip-venv + source aml-ip-venv/bin/activate Ensure that you have TensorFlow and TensorFlow Hub installed in your Python environment before proceeding. -You can install them using pip by executing the following commands: +You can install them by executing the following commands: .. code-block:: bash - pip3 install tensorflow tensorflow-hub tensorflow-object-detection-api nvidia-cudnn-cu11==8.6.0.163 protobuf==3.20.* + python3 -m pip install tensorflow[and-cuda] + pip3 install tensorflow-hub tensorflow-object-detection-api protobuf==3.20 Additionally, it is required to obtain the TensorFlow model from `TensorFlow Hub `_, follow the steps below: @@ -106,7 +100,7 @@ The next block includes the Python header files that allow the use of the AML-IP :lines: 18-19 Let's continue explaining the global variables. -The ``waiter`` allows the node to wait for the inference. +The ``waiter`` object is used to pause the node's execution until the inference result is received. ``DOMAIN_ID`` allows the execution to be isolated because only DomainParticipants with the same Domain Id would be able to communicate to each other. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/edge_node_async.py @@ -127,9 +121,11 @@ We define the ``main`` function. First, we create an instance of ``AsyncEdgeNode``. The first thing the constructor gets is the given name. 
-Then a listener, which is an ``InferenceListenerLambda`` object is created with the function ``inference_received`` declared above. -This function is called each we receive an inference. -And also we specified the domain equal to the DOMAIN_ID variable. +Then a `listener `__, +which is an ``InferenceListenerLambda`` object, is created for the function ``inference_received`` declared above. +The listener acts as an asynchronous notification system that allows the entity to notify the application about the Status changes in the entity. +This function is called each time an inference is received. +Lastly, a ``DOMAIN_ID`` is specified, which allows the execution to be isolated. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/edge_node_async.py :language: python @@ -142,7 +138,7 @@ It converts the size information and the image into bytes and combines the two t :language: python :lines: 51-65 -After that, the ``request_inference`` method is called to request the inference of the image. +Next, the ``request_inference`` method is invoked to send the image for inference. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/edge_node_async.py :language: python @@ -154,7 +150,7 @@ Finally, the program waits for the inference solution using ``waiter.wait``. :language: python :lines: 74 -Once the solution is received, the execution finish. +Once the solution is received, the execution finishes. Inference Node -------------- @@ -172,14 +168,14 @@ The next block includes the Python header files that allow the use of the AML-IP :lines: 19-20 Let's continue explaining the global variables. -``DOMAIN_ID`` allows the execution to be isolated because only DomainParticipants with the same Domain Id would be able to communicate to each other. -``tolerance`` sets a limit to ignore detections with a probability less than the tolerance. 
+The ``DOMAIN_ID`` variable allows the execution to be isolated because only DomainParticipants with the same Domain Id would be able to communicate to each other. +The ``tolerance`` variable sets a threshold to filter out detections with a probability lower than the specified tolerance value. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/inference_node_async.py :language: python :lines: 28-32 -It loads the model from TensorFlow based on the specified path. +The model is loaded from TensorFlow using the specified path. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/inference_node_async.py :language: python @@ -201,22 +197,22 @@ We define the ``main`` function. We create an instance of ``AsyncInferenceNode``. The first thing the constructor gets is the name ``AMLInferenceNode``. -Then the listener which is an ``InferenceReplierLambda(process_inference)``. -This means calling the ``process_inference`` function to perform the inference requests. -And also we specified the domain equal to the DOMAIN_ID variable. +Then a `listener `__, +which is an ``InferenceReplierLambda`` object, is created for the function ``process_inference`` declared above. +This means that the ``process_inference`` function will be called to handle the inference requests. +Additionally, the domain is specified using the DOMAIN_ID variable. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/inference_node_async.py :language: python :lines: 84-87 -This starts the inference node. -It will start listening for incoming inference requests and call the ``process_inference`` function to handle them. +This initiates the Inference Node, which will listen for incoming inference requests and invoke the ``process_inference`` function to handle them. .. 
literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/inference_node_async.py :language: python :lines: 91 -Finally, waits for a SIGINT signal ``Ctrl+C`` to stop the node and close it. +Finally, the node waits for a SIGINT signal (``Ctrl+C``) to stop and close gracefully. .. literalinclude:: /../amlip_demo_nodes/amlip_tensorflow_inference_demo/amlip_tensorflow_inference_demo/inference_node_async.py :language: python @@ -230,7 +226,7 @@ This demo explains the implemented nodes in `amlip_demo_nodes/amlip_tensorflow_i Run Edge Node ------------- -In the first terminal, run the Edge Node with the following command: +In one terminal, run the Edge Node with the following command: .. code-block:: bash @@ -265,7 +261,7 @@ The expected output is the following: Run Inference Node ------------------ -In the second terminal, run the following command to process the inference: +In a second terminal, run the following command to process the inference: .. code-block:: bash @@ -313,8 +309,8 @@ The execution expects an output similar to the one shown below: what(): SWIG director method error. In method 'process_inference': AttributeError: module 'tensorflow' has no attribute 'gfile' Aborted (core dumped) -Next steps ----------- +Results +------- Based on the information acquired, we have successfully generated the next image: @@ -351,7 +347,7 @@ Check following `issue `_ To update the code, please follow these `steps `_: -1. Locate the file `label_map_util.py`. (default path: ``.local/lib/python3.10/site-packages/object_detection/utils/label_map_util.py``) +1. Locate the file `label_map_util.py`. (default path: ``.local/lib/python3.x/site-packages/object_detection/utils/label_map_util.py``) 2. Navigate to line 132 within the file. 3. Replace `tf.gfile.GFile` with `tf.io.gfile.GFile`. 
diff --git a/amlip_docs/rst/demo/workload_distribution.rst b/amlip_docs/rst/demo/workload_distribution.rst index ca8b501a..8ce8cf2a 100644 --- a/amlip_docs/rst/demo/workload_distribution.rst +++ b/amlip_docs/rst/demo/workload_distribution.rst @@ -23,7 +23,7 @@ The nodes are implemented in both Python and C++, illustrating how to instantiat The purpose of this demo is to illustrate how a *Main Node* dispatches jobs to a *Computing Node* and how the *Computing Node* processes them. The *Main Node* waits until a *Computing Node* is available to handle the job, while the *Computing Node* awaits a job to solve. -In this demo, the actual :term:`AML` Engine is not provided, and it is mocked. +In this demo, the actual :term:`AML` Engine is simulated using a mock implementation. This *Mock* simulates a difficult calculation by converting a string to uppercase and randomly waiting between 1 and 5 seconds in doing so. @@ -75,8 +75,8 @@ Computing Node -------------- This node simulates a :ref:`user_manual_nodes_computing`. -It does not use real *AML Jobs*, but strings. -It does not have a real *AML Engine* but instead the calculation is an *upper-case* conversion of the string received. +It processes simple string tasks rather than real *AML Jobs*. +Instead of using a real *AML Engine*, it performs a mock computation by converting the received string to uppercase. It is implemented in |cpp| using :code:`amlip_cpp` API. The code can be found `here `__. diff --git a/amlip_docs/rst/developer_manual/installation/sources/windows/windows.rst b/amlip_docs/rst/developer_manual/installation/sources/windows/windows.rst index de4ca2f9..b6cff0b3 100644 --- a/amlip_docs/rst/developer_manual/installation/sources/windows/windows.rst +++ b/amlip_docs/rst/developer_manual/installation/sources/windows/windows.rst @@ -221,7 +221,7 @@ If using colcon_, use the following command to source them: .. 
code-block:: bash - source /install/setup.bash + source /install/setup.bat Installation methods diff --git a/amlip_docs/rst/getting_started/project_overview.rst b/amlip_docs/rst/getting_started/project_overview.rst index 8e94ab52..7440d973 100644 --- a/amlip_docs/rst/getting_started/project_overview.rst +++ b/amlip_docs/rst/getting_started/project_overview.rst @@ -69,7 +69,7 @@ The API, implementation and testing of this part of the code can be found mainly Python ------ -This is the programming language though to be used by a final user. +This is the programming language intended to be used by the final user. |python| has been chosen as it is easier to work with state-of-the-art :term:`ML` projects. Nodes and classes that the user needs to instantiate in order to implement their own code are parsed from |cpp| by using |swig| tool, giving the user a |python| API. diff --git a/amlip_docs/rst/user_manual/nodes/agent.rst b/amlip_docs/rst/user_manual/nodes/agent.rst index f8983a48..702c9af4 100644 --- a/amlip_docs/rst/user_manual/nodes/agent.rst +++ b/amlip_docs/rst/user_manual/nodes/agent.rst @@ -11,7 +11,7 @@ This tool is developed and maintained by `eProsima` which enables the connection DDS entities such as publishers and subscribers deployed in one geographic location and using a dedicated local network will be able to communicate with other DDS entities deployed in different geographic areas on their own dedicated local networks as if they were all on the same network. This node is in charge of communicating a local node or AML-IP cluster with the rest of the network in WANs. -It centralizes the WAN discovery and communication, i.e. it is the bridge for all the nodes in their LANs with the rest of the AML-IP components. +It serves as the central hub for WAN discovery and communication, acting as a bridge that connects all nodes within their respective LANs to the broader AML-IP network. ..
figure:: /rst/figures/agent_nodes.png :align: center @@ -32,7 +32,7 @@ Steps ----- * Create a new :code:`eprosima::ddspipe::participants::types::Address` object with the address port, external address port, :term:`IP` address and transport protocol. -* Instantiate the ``ClientNode`` creating an object of such class with a name, a connection address and a domain. +* Instantiate the ``ClientNode`` creating an object of this class with a name, a connection address and a domain. * Wait until ``Ctrl+C``. .. tabs:: @@ -97,7 +97,7 @@ Steps ----- * Create a new :code:`eprosima::ddspipe::participants::types::Address` object with the address port, external address port, :term:`IP` address and transport protocol. -* Instantiate the ``ServerNode`` creating an object of such class with a name, a listening address and a domain. +* Instantiate the ``ServerNode`` creating an object of this class with a name, a listening address and a domain. * Wait until ``Ctrl+C``. .. tabs:: @@ -114,7 +114,7 @@ Steps eprosima::ddspipe::participants::types::TransportProtocol::udp); // Create Server Node - eprosima::amlip::node::agent::ServerNode Client_node( + eprosima::amlip::node::agent::ServerNode Server_node( "CppServerNode_Manual", { listening_address }, 200); @@ -152,7 +152,7 @@ Steps Repeater Node ************* -A Repeater Node can be used to repeat messages between networks, that is, the message will be forwarded using the same network interface. This is useful to communicate across LANs. +A Repeater Node is utilized to forward messages between different networks, effectively repeating the message using the same network interface. This functionality is particularly useful for facilitating communication across multiple LANs. .. figure:: /rst/figures/agent_nodes_repeater.png :align: center @@ -162,7 +162,7 @@ Steps ----- * Create a new :code:`eprosima::ddspipe::participants::types::Address` object with the address port, external address port, :term:`IP` address and transport protocol. 
-* Instantiate the ``RepeaterNode`` creating an object of such class with a name, a listening address and a domain. +* Instantiate the ``RepeaterNode`` creating an object of this class with a name, a listening address and a domain. * Wait until ``Ctrl+C``. .. tabs:: diff --git a/amlip_docs/rst/user_manual/nodes/computing.rst b/amlip_docs/rst/user_manual/nodes/computing.rst index 651ca774..6a91dca5 100644 --- a/amlip_docs/rst/user_manual/nodes/computing.rst +++ b/amlip_docs/rst/user_manual/nodes/computing.rst @@ -15,7 +15,7 @@ This node waits for a *Job* serialized as :ref:`user_manual_scenarios_workload_d Synchronous *********** -This node kind does require **active** interaction with the user to perform its action. +This node kind requires **active** interaction with the user to perform its action. This means that once a job is sent, the thread must wait for the solution to arrive before sending another task. User can use method :code:`request_job_solution` to send a new *Job*. The thread calling this method will wait until the whole process has finished and the *Solution* has arrived from @@ -25,7 +25,7 @@ By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Computing Node creating an object of such class with a name. +* Instantiate the Computing Node creating an object of this class with a name. * Create a new :code:`JobDataType` from an array of bytes. * Send a new *Job* synchronously and wait for the solution by calling :code:`request_job_solution`. @@ -67,14 +67,15 @@ Steps Asynchronous ************ -User can use method :code:`request_job_solution` to send a new *Job* from :ref:`user_manual_nodes_main` to send new data. +User can use method :code:`request_job_solution` from :ref:`user_manual_nodes_main` to send a new *Job*. Due to being asynchronous, multiple requests can be sent without waiting for the previous one to finish. +The solution will be sent back to the user through the listener. 
The thread calling this method will wait until the whole process has finished and the *Solution* has arrived from the *Computing Node* in charge of this *Job*. By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Asynchronous Computing Node creating an object of such class with a name, a listener or callback and a domain. +* Instantiate the Asynchronous Computing Node creating an object of this class with a name, a listener or callback and a domain. * Wait for tasks by calling :code:`run`. .. tabs:: diff --git a/amlip_docs/rst/user_manual/nodes/edge.rst b/amlip_docs/rst/user_manual/nodes/edge.rst index 8184a639..bc8bf773 100644 --- a/amlip_docs/rst/user_manual/nodes/edge.rst +++ b/amlip_docs/rst/user_manual/nodes/edge.rst @@ -12,7 +12,7 @@ This node is able to send data serialized as :ref:`user_manual_datatype_inferenc Synchronous *********** -This node kind does require **active** interaction with the user to perform its action. +This node kind requires **active** interaction with the user to perform its action. Once the data is sent, the thread must wait for the inference to arrive before sending another data. Users can use method :code:`request_inference` to send new data. The thread calling this method will wait until the whole process has finished and the *Inference* has arrived from the :ref:`user_manual_nodes_inference` in charge of this data. @@ -21,7 +21,7 @@ By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Edge Node creating an object of such class with a name. +* Instantiate the Edge Node creating an object of this class with a name. * Create a new :code:`InferenceDataType` from an array of bytes. * Send a data synchronously and wait for the inference by calling :code:`request_inference`. @@ -59,15 +59,15 @@ Asynchronous ************ Users can use method :code:`request_inference` to send new data. 
-The thread calling this method must wait until the whole process has finished and the *Inference* has arrived from the :ref:`user_manual_nodes_inference` in charge of this data that will process it by the Listener or callback given, and return the Inference calculated in other thread. +Due to being asynchronous, multiple requests can be sent without waiting for the previous one to finish. The solution will be sent back to the user through the listener. By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Asynchronous Edge Node creating an object of such class with a name, a listener or callback and a domain. +* Instantiate the Asynchronous Edge Node creating an object of this class with a name, a listener or callback and a domain. * Create a new :code:`InferenceDataType` from an array of bytes. -* Send a data synchronously calling :code:`request_inference`. +* Send data asynchronously by calling :code:`request_inference`. * Wait for the inference. .. tabs:: @@ -76,6 +76,9 @@ Steps .. code-block:: python + # Inference listener callback: it is called + # with each *Inference* result that is received + # from the Inference Node. def inference_received( inference, task_id, diff --git a/amlip_docs/rst/user_manual/nodes/inference.rst b/amlip_docs/rst/user_manual/nodes/inference.rst index ce062087..3638e9d3 100644 --- a/amlip_docs/rst/user_manual/nodes/inference.rst +++ b/amlip_docs/rst/user_manual/nodes/inference.rst @@ -6,22 +6,21 @@ Inference Node ############## -This node waits for data serialized as :ref:`user_manual_datatype_inference`, and once received it calculate the inference whose output is the inference solution as :ref:`user_manual_datatype_inference_solution` and send the result back. +This node processes data serialized as :ref:`user_manual_datatype_inference`. 
Upon receiving the data, it computes the inference and produces an output in the form of :ref:`user_manual_datatype_inference_solution`, which is then sent back to the requester. *********** Synchronous *********** -This node kind does require **active** interaction with the user to perform its action. -This means that calling `process_inference` will wait for receiving data, and will only finish when the result is sent back. -User can use method :code:`request_inference` from :ref:`user_manual_nodes_edge` to send new data. -The thread calling this method will wait until the whole process has finished and the *Inference* has arrived from the *Inference Node* in charge of this data. +This node requires **active** user interaction to perform its tasks. +When calling :code:`process_inference`, the method will block and wait for incoming data, only completing once the result is sent back. Users can utilize the :code:`request_inference` method from :ref:`user_manual_nodes_edge` to submit new data. +The thread invoking this method will remain blocked until the entire process is completed and the *Inference* result is received from the responsible *Inference Node*. By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Inference Node creating an object of such class with a name. +* Instantiate the Inference Node creating an object of this class with a name. * Wait for the data by calling :code:`process_inference`. * Return the inference as an :code:`InferenceSolutionDataType`. @@ -63,13 +62,13 @@ Asynchronous ************ User can use method :code:`request_inference` from :ref:`user_manual_nodes_edge` to send new data. -The thread calling this method must wait until the whole process has finished and the *Inference* has arrived from the *Inference Node* in charge of this data that will process it by the Listener or callback given, and return the Inference calculated in other thread. 
+Due to being asynchronous, multiple requests can be sent without waiting for the previous one to finish. The solution will be sent back to the user through the listener. By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Asynchronous Inference Node creating an object of such class with a name, a listener or callback and a domain. +* Instantiate the Asynchronous Inference Node creating an object of this class with a name, a listener or callback and a domain. * Wait for the data by calling :code:`run`. .. tabs:: diff --git a/amlip_docs/rst/user_manual/nodes/main.rst b/amlip_docs/rst/user_manual/nodes/main.rst index 746e492f..cb35ed84 100644 --- a/amlip_docs/rst/user_manual/nodes/main.rst +++ b/amlip_docs/rst/user_manual/nodes/main.rst @@ -25,7 +25,7 @@ By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Main Node creating an object of such class with a name. +* Instantiate the Main Node creating an object of this class with a name. * Create a new :code:`JobDataType` from an array of bytes. * Send a new *Job* synchronously and wait for the solution by calling :code:`request_job_solution`. @@ -63,15 +63,15 @@ Asynchronous ************ Users can use method :code:`request_job_solution` to send a new *Job*. -The thread calling this method must wait until the whole process has finished and the *Solution* has arrived from the :ref:`user_manual_nodes_computing` in charge of this data that will process it by the Listener or callback given, and return the Solution calculated in other thread. +Due to being asynchronous, multiple requests can be sent without waiting for the previous one to finish. The solution will be sent back to the user through the listener. By destroying the node every internal entity is correctly destroyed. Steps ----- -* Instantiate the Asynchronous Main Node creating an object of such class with a name, a listener or callback and a domain. 
+* Instantiate the Asynchronous Main Node creating an object of this class with a name, a listener or callback and a domain. * Create a new :code:`JobDataType` from an array of bytes. -* Send a new *Job* synchronously and wait for the solution by calling :code:`request_job_solution`. +* Send a new *Job* asynchronously by calling :code:`request_job_solution`. * Wait for the solution. .. tabs:: diff --git a/amlip_docs/rst/user_manual/nodes/model_manager_receiver.rst b/amlip_docs/rst/user_manual/nodes/model_manager_receiver.rst index e0cc84bc..f835134c 100644 --- a/amlip_docs/rst/user_manual/nodes/model_manager_receiver.rst +++ b/amlip_docs/rst/user_manual/nodes/model_manager_receiver.rst @@ -19,7 +19,7 @@ Steps * Create the Id of the node. * Create the data to request. -* Instantiate the ModelManagerReceiver Node creating an object of such class with the Id and data previously created. +* Instantiate the ModelManagerReceiver Node creating an object of this class with the Id and data previously created. * Start the execution of the node. * Wait for statistics. * Request the model. @@ -32,6 +32,8 @@ Steps .. code-block:: cpp + // Include the required headers + #include #include @@ -41,6 +43,7 @@ Steps #include + // Listener that waits for statistics and models to be received. class CustomModelListener : public eprosima::amlip::node::ModelListener { @@ -124,6 +127,7 @@ Steps # Variable to wait for the statistics waiter = BooleanWaitHandler(True, False) + # Listener that waits for statistics and models to be received class CustomModelListener(ModelListener): def statistics_received( diff --git a/amlip_docs/rst/user_manual/nodes/model_manager_sender.rst b/amlip_docs/rst/user_manual/nodes/model_manager_sender.rst index aea07e75..e8caeee2 100644 --- a/amlip_docs/rst/user_manual/nodes/model_manager_sender.rst +++ b/amlip_docs/rst/user_manual/nodes/model_manager_sender.rst @@ -20,7 +20,7 @@ Steps * Create the Id of the node. 
* Create the statistics to be sent. -* Instantiate the ModelManagerSender Node creating an object of such class with the Id and statistics previously created. +* Instantiate the ModelManagerSender Node by creating an object of this class with the previously created Id and statistics. * Start the execution of the node. * Wait for a model request to arrive and be answered. * Stop the execution of the node. @@ -32,6 +32,7 @@ Steps .. code-block:: cpp + // Include the required headers #include #include diff --git a/amlip_docs/rst/user_manual/nodes/nodes.rst b/amlip_docs/rst/user_manual/nodes/nodes.rst index 8d26792a..5b8c924c 100644 --- a/amlip_docs/rst/user_manual/nodes/nodes.rst +++ b/amlip_docs/rst/user_manual/nodes/nodes.rst @@ -42,7 +42,7 @@ Node State ---------- The state of a node reflects its current operational status. -Node states can be used to control the actions that the node performs to communicate with other nodes, or to indicate the status of the node to the user. +Node states serve as indicators of the node's current activity and can be utilized to manage interactions with other nodes. They also provide users with insights into the node's operational status. Nodes modify their states using the method :code:`change_status_(const eprosima::amlip::types::StateKind& new_state)`. diff --git a/amlip_docs/rst/user_manual/nodes/status.rst b/amlip_docs/rst/user_manual/nodes/status.rst index 0ddbcc96..1064320b 100644 --- a/amlip_docs/rst/user_manual/nodes/status.rst +++ b/amlip_docs/rst/user_manual/nodes/status.rst @@ -8,25 +8,25 @@ Status Node ########### -This kind of node :term:`Subscribe` to |status| :term:`Topic`. -Thus it receives every |status| data from all the other :term:`Nodes ` in the network. +This type of node :term:`subscribes ` to the |status| :term:`Topic`. +Thus it receives every |status| data from all the other :term:`nodes ` in the network. This node runs a *function* that will be executed with each message received. 
-This is the main agent of :ref:`user_manual_scenarios_status`.
+This node is the primary component of the :ref:`user_manual_scenarios_status` scenario.
 
-Example of Usage
+Example Usage
 ================
 
-This node kind does require **few interaction** with the user once it is running.
-User must start and stop this node as desired using methods :code:`process_status_async` and :code:`stop_processing`.
+This type of node requires **minimal interaction** with the user once it is running.
+The user can start and stop this node as desired using the methods :code:`process_status_async` and :code:`stop_processing`.
 Also, user must yield a callback (function) that will be executed with every |status| message received.
 
-By destroying the node it stops if running, and every internal entity is correctly destroyed.
+When the node is destroyed, it automatically stops any ongoing processes, ensuring that all internal entities are properly cleaned up.
 
 Steps
 -----
 
-* Instantiate the Status Node creating an object of such class with a name.
-* Start processing status data of the network calling :code:`process_status_async`.
-* Stop processing data calling :code:`stop_processing`.
+* Instantiate the Status Node by creating an object of this class with a name.
+* Start processing the status data of the network by calling :code:`process_status_async`.
+* Stop processing data by calling :code:`stop_processing`.
 
 .. tabs::
 
diff --git a/amlip_docs/rst/user_manual/scenarios/collaborative_learning.rst b/amlip_docs/rst/user_manual/scenarios/collaborative_learning.rst
index 86923cf1..f47f8a11 100644
--- a/amlip_docs/rst/user_manual/scenarios/collaborative_learning.rst
+++ b/amlip_docs/rst/user_manual/scenarios/collaborative_learning.rst
@@ -38,7 +38,7 @@ Model Reply Data Type
 =====================
 
 The **Model Reply** Data Type represents a problem reply with the requested model.
-The *replies* sent from a *Model Manager Sender Node* to a *Model Manager Receiver Node* are treated as a bytes array of arbitrary size.
+The *replies* sent from a *Model Manager Sender Node* to a *Model Manager Receiver Node* are treated as a byte array of arbitrary size.
 So far, the interaction with this class could be done from a :code:`void*`, a byte array or a string.
 
 .. note::
@@ -53,7 +53,7 @@ Model Statistics Data Type
 ==========================
 
 The **Statistics** Data Type represents the statistics of models, such as their number of parameters or the datasets they were trained on.
-The *messages* sent from a *Model Manager Sender Node* to a *Model Manager Receiver Node* are treated as a bytes array of arbitrary size.
+The *messages* sent from a *Model Manager Sender Node* to a *Model Manager Receiver Node* are treated as a byte array of arbitrary size.
 So far, the interaction with this class could be done from a :code:`void*`, a byte array or a string.
 
 .. note::
diff --git a/amlip_docs/rst/user_manual/scenarios/monitor_state.rst b/amlip_docs/rst/user_manual/scenarios/monitor_state.rst
index 5044bf7b..d803b270 100644
--- a/amlip_docs/rst/user_manual/scenarios/monitor_state.rst
+++ b/amlip_docs/rst/user_manual/scenarios/monitor_state.rst
@@ -9,7 +9,7 @@ Monitor Network State Scenario
 ##############################
 
 This :term:`Scenario` performs the monitoring action: knowing, analyzing and debugging an |aml| network.
-Each of the |amlip| :term:`Nodes <Node>` :term:`Publish` their current |status| information and update it along their lifetimes.
+Each of the |amlip| :term:`nodes <Node>` :term:`publishes <Publish>` its current |status| information and updates it along its lifetime.
 This scenario supports :term:`subscription <Subscribe>` to this :term:`Topic` in order to receive such status information, that can be processed, stored, read, etc.
 
 .. figure:: /rst/figures/scenarios/status_scenario.png
diff --git a/amlip_docs/rst/user_manual/scenarios/workload_distribution.rst b/amlip_docs/rst/user_manual/scenarios/workload_distribution.rst
index ae42a087..09c177da 100644
--- a/amlip_docs/rst/user_manual/scenarios/workload_distribution.rst
+++ b/amlip_docs/rst/user_manual/scenarios/workload_distribution.rst
@@ -42,7 +42,7 @@ Job Solution Data Type
 ======================
 
 The **Solution** Data Type represents an *Atomization* or new model state.
-The **Solution** sent from a *Computing Node* to a *Main Node* is treated as a bytes array of arbitrary size.
+The **Solution** sent from a *Computing Node* to a *Main Node* is treated as a byte array of arbitrary size.
 So far, the interaction with this class could be done from a :code:`void*`, a byte array or a string.
 From Python API, the only way to interact with it is by `str` type.

From 457c7aa34812679cb3984f031e7dbc053d626380 Mon Sep 17 00:00:00 2001
From: Denisa
Date: Fri, 25 Oct 2024 10:57:50 +0200
Subject: [PATCH 3/3] [21989] Update AML-IP documentation to include Installation with Docker in more details

Signed-off-by: Denisa

---
 amlip_docs/rst/installation/docker.rst | 38 +++++++++++++++++++-
 1 file changed, 37 insertions(+), 1 deletion(-)

diff --git a/amlip_docs/rst/installation/docker.rst b/amlip_docs/rst/installation/docker.rst
index 753825b7..fe764a6a 100644
--- a/amlip_docs/rst/installation/docker.rst
+++ b/amlip_docs/rst/installation/docker.rst
@@ -8,4 +8,40 @@ Docker image
 ############
 
 A pre-compiled image of the |amlip| is not available at this stage.
-However, please find instructions on how to create your own Docker image `here `__.
+However, a Dockerfile is available `here `__ to build your own Docker image.
+
+The resulting image contains an AML-IP installation that is able to run the demo nodes.
+
+Getting the Dockerfile
+======================
+
+Clone the AML-IP repository.
+
+.. code-block:: bash
+
+    git clone https://github.com/eProsima/AML-IP.git
+
+Building the Docker image
+=========================
+
+Navigate to the Docker directory.
+
+.. code-block:: bash
+
+    cd AML-IP/docker
+
+Build the Docker image.
+
+.. code-block:: bash
+
+    docker build --rm -t amlip -f Dockerfile .
+    # use the --no-cache argument to force a full rebuild
+
+Using the Docker image
+======================
+
+Run the built Docker image.
+
+.. code-block:: bash
+
+    docker run --rm -it --net=host --ipc=host amlip
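The Docker workflow documented in the last patch can be sketched end to end as a single shell function. This is a minimal sketch, not part of the patch: the function name is hypothetical, and it assumes `git clone` creates the `AML-IP` directory as usual.

```shell
#!/usr/bin/env bash
# Sketch: end-to-end quick start for the AML-IP Docker image, combining
# the clone, build and run steps from the docker.rst changes above.
# The function name is an assumption for illustration only.
amlip_docker_quickstart() {
    # Clone the AML-IP repository (creates the AML-IP directory)
    git clone https://github.com/eProsima/AML-IP.git &&
    # Navigate to the Docker directory
    cd AML-IP/docker &&
    # Build the Docker image (add --no-cache to force a full rebuild)
    docker build --rm -t amlip -f Dockerfile . &&
    # Run the built Docker image
    docker run --rm -it --net=host --ipc=host amlip
}
```

Each step is chained with `&&` so the workflow stops at the first failing command.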