Configuration Parameters
This section provides more detailed information about how to configure the universAAL middleware.
The universAAL releases have been prepared by modifying Apache Karaf with some extra configuration files. This section describes the configuration files we added for the purpose of universAAL. Similar files can also be found for Pax.
The configuration files are located in subfolder:
- Karaf: etc
- Pax: rundir/confadmin or rundir/etc
The universAAL middleware relies on the following configuration files (universaal-3.2.0-on-karaf is the home directory of the universAAL release):
- Communication connector: allows nodes to exchange messages over the network. The default communication connector is based on jGroups:
etc/mw.connectors.communication.jgroups.core.cfg
- Discovery connector: allows nodes to discover and announce a uSpace. The default discovery connector is based on the SLP protocol:
etc/mw.connectors.discovery.slp.core.cfg
- uSpace Manager: controls and manages the creation of a Space and the joining/leaving of peers to/from it:
etc/mw.managers.space.core.cfg
etc/mw.managers.aalspace.core.cfg (for version <= 3.4.0)
- Note the following properties:
- spaceConfigurationPath=etc/ : the default location for all the configurations (aalSpaceConfigurationPath for version <= 3.4.0)
- peerRole=COORDINATOR : the role played by this instance; it can be PEER or COORDINATOR
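Putting the two properties together, the uSpace Manager configuration file might look roughly like this (only the property names and values above come from the text; the layout is an illustrative sketch, not the full shipped file):

```
spaceConfigurationPath=etc/
peerRole=COORDINATOR
```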
- Deploy manager: controls the installation and uninstallation of applications in the middleware (it is controlled by the uCC component):
etc/mw.managers.deploy.core.cfg
- uSpace module
etc/mw.modules.space.core.cfg
etc/mw.modules.aalspace.core.cfg (for version <= 3.4.0)
Moreover, the etc folder contains some directories with further configuration files. In particular:
- Static ID of the peer
etc/mw.managers.space.osgi/peer.ids
etc/mw.managers.aalspace.osgi/peer.ids (for version <= 3.4.0)
- Remove this file if you want to force the node to create a new random ID
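Resetting the ID can be sketched in shell as follows (the first two lines only recreate a minimal stand-in for the release layout; in a real installation the directory and peer.ids file already exist under the release home):

```shell
# Demo stand-in for the release home layout; a real install already has these.
mkdir -p etc/mw.managers.space.osgi
touch etc/mw.managers.space.osgi/peer.ids

# Delete the stored ID; the node will generate a new random ID at next start.
rm -f etc/mw.managers.space.osgi/peer.ids
```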
- Configuration for the multi-tenant feature of the universAAL middleware
etc/ri.gateway.multitenant
The universAAL release has been created by also modifying some default Karaf configuration files:
- We added some URLs pointing to the universAAL features we want to ship in every universAAL release:
etc/org.ops4j.pax.url.mvn.cfg
- Note the list of URLs:
org.ops4j.pax.url.mvn.repositories= \
https://repo1.maven.org/maven2, \
http://repository.apache.org/content/groups/snapshots-group@snapshots@noreleases, \
http://svn.apache.org/repos/asf/servicemix/m2-repo, \
http://repository.springsource.com/maven/bundles/release, \
http://repository.springsource.com/maven/bundles/external, \
http://depot.universaal.org/maven-repo/releases, \
http://depot.universaal.org/maven-repo/snapshots@snapshots, \
http://depot.universaal.org/maven-repo/thirdparty
- At start-up, Karaf reads these URLs and downloads the list of features available for installation. Indeed, if you type
$ karaf@uAAL>features:list
- You will see a huge list of features
- We added some default properties for Karaf
etc/system.properties
- In particular we added:
- the default SLP port used to announce and to discover uSpaces
net.slp.port=5555
- the default timeouts for SLP multicast
net.slp.multicastTimeouts=500,750
- the default configuration for jGroups based on IPv4
java.net.preferIPv4Stack=true
- the default location of the bundle configurations
bundles.configuration.location=etc/
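Taken together, the universAAL additions to etc/system.properties amount to the following block (values exactly as listed above):

```
net.slp.port=5555
net.slp.multicastTimeouts=500,750
java.net.preferIPv4Stack=true
bundles.configuration.location=etc/
```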
- Optionally, it is possible to add the following properties in order to encrypt the messages exchanged among nodes:
universaal.security.enabled=true
bouncycastle.key=support-release/etc
- and copy the same key file (e.g. sodapop.key) into the etc/ folder of every node you want to run.
- We added some properties to the start-up script files:
bin/karaf.sh [LINUX]
bin/karaf.bat [WIN]
- In particular:
OPTS="-Dkaraf.startLocalConsole=true -Dkaraf.startRemoteShell=true -Djava.net.preferIPv4Stack=true"
- We noticed that removing the option -Djava.net.preferIPv4Stack=true prevents the universAAL middleware from starting correctly under Unix systems, so we recommend NOT modifying the Karaf start-up script. If you have trouble starting the universAAL middleware under Unix, try exporting the following environment variable before starting it:
$ export JAVA_OPTS='-Djava.net.preferIPv4Stack=true -Dnet.slp.port=5555'
You can run the universAAL middleware without a network. In this case you will use your loopback network interface.
- First of all, you must have all the bundles installed locally (in your .m2 directory). Remember that without Internet access, Karaf cannot download any bundle at runtime. You need at least to install the following universAAL project with Maven:
universAAL-middleware/trunk/pom
and type
mvn install [optionally if tests fail, 'mvn -DskipTests install']
- To disable SLP announcements on the network (restricting SLP to the loopback interface), you need to add a special property before starting Karaf:
export JAVA_OPTS='-Djava.net.preferIPv4Stack=true -Dnet.slp.port=5555 -Dnet.slp.interfaces=127.0.0.1'
Alternatively, and on Windows, append the following property at the end of the configuration file etc/system.properties:
net.slp.interfaces=127.0.0.1
- To prevent JGroups from broadcasting messages to the network, you need to add this property:
jgroups.bind_addr=127.0.0.1
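Putting the offline settings together, one way (a sketch, assuming all four properties are honored as JVM system properties, as the examples above suggest) is a single JAVA_OPTS export before starting Karaf:

```shell
# Bind SLP and JGroups to the loopback interface and keep the IPv4 stack,
# so the middleware runs without any external network.
export JAVA_OPTS='-Djava.net.preferIPv4Stack=true -Dnet.slp.port=5555 -Dnet.slp.interfaces=127.0.0.1 -Djgroups.bind_addr=127.0.0.1'
```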
- Run the universAAL release
You can run several universAAL instances locally. This section describes how to:
- Configure and Run 2 universAAL instances with Karaf
- Configure and Run 2 universAAL instances with Pax (Felix@Eclipse)
- Move to your first Karaf installation, copy and paste the universAAL distro, then rename it accordingly. For example:
uAAL1/ (the first instance)
uAAL2/ (the second instance)
- Move to the second instance (uAAL2) and edit the following configuration file:
uAAL2/etc/org.apache.karaf.management.cfg
- In particular edit:
rmiRegistryPort=1099 change to rmiRegistryPort=1111
rmiServerPort=44444 change to rmiServerPort=44441
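The two port changes can be scripted; a sketch using GNU sed (the first two lines only recreate a minimal stand-in copy of the file for illustration, a real uAAL2 installation already has the full file):

```shell
# Demo stand-in for uAAL2/etc/org.apache.karaf.management.cfg.
mkdir -p uAAL2/etc
printf 'rmiRegistryPort=1099\nrmiServerPort=44444\n' > uAAL2/etc/org.apache.karaf.management.cfg

# Bump both RMI ports so the second Karaf instance does not clash with the first.
sed -i 's/^rmiRegistryPort=1099$/rmiRegistryPort=1111/' uAAL2/etc/org.apache.karaf.management.cfg
sed -i 's/^rmiServerPort=44444$/rmiServerPort=44441/' uAAL2/etc/org.apache.karaf.management.cfg
```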
- Edit the role of the uAAL2 instance, in particular edit the configuration file :
uAAL2/etc/mw.managers.space.core.cfg
- and change:
peerRole=COORDINATOR to peerRole=PEER
- In this way the uAAL2 instance is a PEER and the first instance is a COORDINATOR.
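The role change can likewise be scripted with GNU sed (the first two lines only recreate a minimal stand-in copy of the file; a real uAAL2 installation already has it):

```shell
# Demo stand-in for uAAL2/etc/mw.managers.space.core.cfg.
mkdir -p uAAL2/etc
echo 'peerRole=COORDINATOR' > uAAL2/etc/mw.managers.space.core.cfg

# Demote the second instance to PEER; the first instance stays COORDINATOR.
sed -i 's/^peerRole=COORDINATOR$/peerRole=PEER/' uAAL2/etc/mw.managers.space.core.cfg
```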
- If you are using Linux, please export this variable from the terminal you are using to run the Karaf instances:
export JAVA_OPTS='-Djava.net.preferIPv4Stack=true -Dnet.slp.port=5555'
- Run the first instance (uAAL1) by calling
uAAL1/bin/karaf
- At this point you should see the creation of the channels:
-------------------------------------------------------------------
GMS: address=c69ec5c4-6176-4660-b921-2c0d242a5b71, cluster=mw.modules.space.osgi8888, physical address=192.168.1.2:61427
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=c69ec5c4-6176-4660-b921-2c0d242a5b71, cluster=mw.bus.ui.osgi8888, physical address=192.168.1.2:61430
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=c69ec5c4-6176-4660-b921-2c0d242a5b71, cluster=mw.brokers.control.osgi8888, physical address=192.168.1.2:61435
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=c69ec5c4-6176-4660-b921-2c0d242a5b71, cluster=mw.bus.context.osgi8888, physical address=192.168.1.2:61438
-------------------------------------------------------------------
-------------------------------------------------------------------
GMS: address=c69ec5c4-6176-4660-b921-2c0d242a5b71, cluster=mw.bus.service.osgi8888, physical address=192.168.1.2:61443
-------------------------------------------------------------------
- Run the second instance (uAAL2) by calling
uAAL2/bin/karaf
You can type the following universAAL OSGi commands.
- List the uSpaces:
karaf@root> universaal:spaces
Found: 1 AAL Spaces
----------------------------------------
* myHome3 - Super Domestic Home - 8888 - 3d0d1d96-c6f9-4808-814d-462527dcd127 - mw.modules.space.osgi - http://aaloa.isti.cnr.it/udp.xml
- In this case there is just one uSpace, called 'Super Domestic Home'
- List the remote or local peers that joined the same uSpace:
karaf@root> universaal:peers
Found: 2 Peers
----------------------------------------
* Peer ID: 3d0d1d96-c6f9-4808-814d-462527dcd127 - Peer Role: COORDINATOR
Peer ID: dee05f72-864a-4236-9644-4123d6dc0fc6 - Peer Role: PEER
- In this case there are 2 peers: a COORDINATOR and a PEER. The asterisk (*) marks the instance you are using.
universAAL can also be run from Eclipse by means of the universAAL Plugin, as reported in the following Figure.
If you want to run 2 universAAL instances from Eclipse you must create a different configuration folder for each of the universAAL instances. Every time you run universAAL from inside Eclipse, universAAL reads all the needed configuration files from the rundir folder inside your Eclipse workspace. For example:
${workspace_loc}/rundir
Since you want to run 2 instances, every universAAL instance must use its own rundir folder with its own configuration files. The following configuration files are modified by every universAAL instance:
- mw.managers.space.osgi/peer.ids: this file stores the last ID of the universAAL instance; if this file is duplicated across instances, all the instances will erroneously share the same ID
- services/mw.managers.space.core.properties: this file configures the universAAL instance to act as COORDINATOR or PEER. You may want to run one instance as COORDINATOR and another one as PEER
Please modify
- VM arguments: change the property
-Dbundles.configuration.location=${workspace_loc}/rundir2/confadmin
- so that it points to a rundir folder different from the first instance's
- Working directory path: change the path to e.g.
${workspace_loc}/rundir2/smp.lighting
For creating several instances of universAAL that DO NOT SHARE the same environment, you only need to modify the file Home.space by changing the spaceId tag.
Nevertheless, the current communication protocol sends data to all the uSpaces on the same network, so to isolate each uSpace even at the communication level (thus reducing the protocol overhead of discarding unneeded messages) you have to set either the property
jgroups.udp.mcast_port -> identifies the destination port for multicast packets
or
jgroups.udp.mcast_addr -> identifies the destination address for multicast packets
to a custom value.
IMPORTANT: All the nodes that belong to the same uSpace MUST share the same values for both properties jgroups.udp.mcast_port and jgroups.udp.mcast_addr. For example, on Linux you can start Karaf with the following command:
JAVA_OPTS="-Djgroups.udp.mcast_port=4521" bin/karaf
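To give a second uSpace its own multicast endpoint, you might export both properties on every node of that uSpace before running bin/karaf; a sketch (4521 is the port from the example above, while 228.8.8.8 is an illustrative multicast address, not a universAAL default):

```shell
# Every node of the same uSpace must use identical values for BOTH properties.
export JAVA_OPTS="-Djgroups.udp.mcast_port=4521 -Djgroups.udp.mcast_addr=228.8.8.8"
```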
Found a problem?
- Report suggestions, missing, outdated or wrong documentation creating an Issue with "documentation" tag