Conversation
Adds configuration files for various 5GC network functions (AMF, AUSF, CHF, gNB, N3IWF, N3UE, NEF, NRF, NSSF, PCF, SMF, TNGF, UDM, UDR, WebUI, and UPF) to facilitate integration tests. This introduces default configurations and network topologies, which improve the testing infrastructure. It defines network function parameters, security settings, supported slices, and routing information.
Introduces Dockerfiles for base builder and executor images to streamline the build process. Creates Dockerfile for gnbsim based on the provided context, and orchestrates the build process of omnipath and gnbsim components using docker-bake.hcl. The changes facilitate containerization of the application and its dependencies, improving portability and reproducibility.
Adds a comprehensive integration test suite with a Docker Compose setup for free5gc components and integration with an OMEC gnbsim instance. Includes TLS certificates for secure communication between network functions, configures gnbsim with the OMEC project for gNB simulation, and sets up an Omnipath AMF configuration to enable 5G core network functionalities. Also includes user initialization scripts, and a network capture utility to analyze network traffic and enable end-to-end testing.
Introduces a Python script to capture and analyze network traffic, focusing on 5G core network protocols. The script leverages `tshark` for packet capture and `scapy` for packet analysis. It extracts payloads from TCP, UDP, and SCTP packets, reconstructs HTTP/1 and HTTP/2 messages, and logs the data. It processes pcap files generated by tshark, adjusts packet timestamps for accurate analysis, and handles TCP stream reassembly for HTTP/2. The script filters packets based on IP addresses and protocols of interest to reduce processing overhead.
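The payload-extraction step described above can be sketched without the scapy dependency; the function below is a minimal illustration (not the script's actual code) of pulling the TCP payload out of a raw IPv4 packet using only the stdlib:

```python
import struct

def extract_tcp_payload(ip_packet: bytes):
    """Return (src_port, dst_port, payload) from a raw IPv4+TCP packet, or None.

    A simplified sketch: no options validation, no IPv6, no fragmentation.
    """
    ihl = (ip_packet[0] & 0x0F) * 4       # IPv4 header length in bytes
    if ip_packet[9] != 6:                 # IP protocol field: 6 = TCP
        return None
    tcp = ip_packet[ihl:]
    src_port, dst_port = struct.unpack("!HH", tcp[:4])
    data_offset = (tcp[12] >> 4) * 4      # TCP header length from the data-offset nibble
    return src_port, dst_port, tcp[data_offset:]
```

In the real script, scapy's packet classes do this dissection (plus SCTP), but the byte-level layout is the same.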
Implements RAN connection closing functionality. The changes ensure that when a RAN (gNB) connection is closed, the associated UE contexts are properly notified and cleaned up. This includes closing the TNLA association and removing the gNB context from the list of active connections.
Allows the application to resolve hostnames to IP addresses from configuration files. This change replaces direct IP address configuration with a resolver capable of parsing either an IP address string or a domain name, thus improving the flexibility and adaptability of network configurations.
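The parse-or-resolve behavior can be sketched as follows (in Python for illustration; the actual change is in the Rust config loader, and `resolve_host` is a hypothetical name):

```python
import ipaddress
import socket

def resolve_host(value: str) -> str:
    """Accept either an IP literal or a hostname; return an IP address string."""
    try:
        # Fast path: the config value is already an IPv4/IPv6 literal.
        return str(ipaddress.ip_address(value))
    except ValueError:
        # Fall back to DNS; note only the first resolved address is used.
        return socket.gethostbyname(value)
```

The same "try IP literal first, then DNS" ordering avoids a DNS round-trip for configs that still use plain addresses.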
Migrates the packet capture process to be managed by Supervisord. This change provides better process management for the capturer service, ensuring it restarts automatically if it crashes and simplifies the overall service configuration. It also adds a healthcheck that verifies that tshark is dumping pcap files. The Dockerfile is updated to install Supervisord, Tshark and its dependencies and configure it with a configuration file that manages the tshark process.
Adds patterns to ignore pcaps and certificates generated during integration tests. Also standardizes the comments style in the gitignore file.
Introduces a new package for decoding HTTP/2 traffic from PCAP files. This package includes classes for decoding HTTP/2 frames, managing connection state, and representing HTTP/2 streams.
Implements connection management and HPACK decompression functionalities for HTTP/2 traffic analysis. This includes managing connection state, handling HPACK contexts for both client and server directions, and providing header decoding.
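As a rough illustration of what HPACK header decoding involves, the sketch below handles only fully-indexed header fields against a small subset of the RFC 7541 static table; the real package presumably uses a complete HPACK implementation with per-direction dynamic-table state:

```python
# Subset of the HPACK static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    2: (":method", "GET"),
    3: (":method", "POST"),
    4: (":path", "/"),
    7: (":scheme", "https"),
    8: (":status", "200"),
}

def decode_indexed_fields(block: bytes):
    """Decode a header block containing only indexed fields (RFC 7541 §6.1).

    Each byte with the top bit set names a table entry via its 7-bit index.
    Literal representations and multi-byte integers need a full HPACK decoder.
    """
    headers = []
    for byte in block:
        if byte & 0x80:
            index = byte & 0x7F
            headers.append(STATIC_TABLE[index])
        else:
            raise NotImplementedError("literal representations need full HPACK")
    return headers
```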
Introduces the HTTP2Stream class to manage the state of HTTP/2 streams, including headers, data, and trailers for both client and server sides. Facilitates the tracking of stream completeness and provides methods for extracting request method, path, and response status.
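A minimal sketch of that kind of per-stream bookkeeping (class and field names here are assumptions for illustration, not the actual `HTTP2Stream` API):

```python
class StreamSketch:
    """Tracks headers, data, and completeness for both directions of one stream."""

    def __init__(self, stream_id: int):
        self.stream_id = stream_id
        self.client = {"headers": {}, "data": b"", "complete": False}
        self.server = {"headers": {}, "data": b"", "complete": False}

    def _side(self, direction):
        return self.client if direction == "client" else self.server

    def add_headers(self, direction, headers, end_stream=False):
        side = self._side(direction)
        side["headers"].update(headers)
        side["complete"] |= end_stream   # END_STREAM flag closes that direction

    def add_data(self, direction, chunk, end_stream=False):
        side = self._side(direction)
        side["data"] += chunk
        side["complete"] |= end_stream

    @property
    def method(self):
        return self.client["headers"].get(":method")

    @property
    def status(self):
        return self.server["headers"].get(":status")
```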
Adds a class for decoding HTTP/2 traffic from PCAP files. This class manages TCP stream reassembly and HTTP/2 frame parsing, allowing for extraction of request/response data, headers, and settings. It handles connection-level and stream-specific frames, including SETTINGS, PING, GOAWAY, HEADERS, DATA, and PUSH_PROMISE. The decoder also attempts to recover from parsing errors by locating the next potential frame header.
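The frame-level parsing can be illustrated with the 9-byte HTTP/2 frame header layout from RFC 7540 §4.1 (a simplified sketch, not the decoder's actual code):

```python
import struct

FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x6: "PING", 0x7: "GOAWAY"}

def parse_frame(buf: bytes):
    """Parse one frame from buf; return (frame, remaining) or (None, buf) if incomplete."""
    if len(buf) < 9:
        return None, buf                       # need the full 9-byte header first
    length = int.from_bytes(buf[0:3], "big")   # 24-bit payload length
    ftype, flags = buf[3], buf[4]
    stream_id = struct.unpack("!I", buf[5:9])[0] & 0x7FFFFFFF  # clear reserved bit
    if len(buf) < 9 + length:
        return None, buf                       # payload not fully buffered yet
    frame = {
        "type": FRAME_TYPES.get(ftype, f"UNKNOWN(0x{ftype:02x})"),
        "flags": flags,
        "stream_id": stream_id,
        "payload": buf[9:9 + length],
    }
    return frame, buf[9 + length:]
```

Returning the unconsumed tail makes it natural to loop over a reassembled TCP buffer until no complete frame remains.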
Adds logging utilities to aid in debugging and tracking packet processing within the h2_decoder component. This allows for better visibility into the decoder's operation during integration testing.
Implements HTTP/2 reassembly and processing using the `h2_decoder` library. This change introduces the capability to capture and reconstruct HTTP/2 traffic from TCP streams. The code now buffers TCP payloads, parses HTTP/2 frames using the `h2_decoder`, and extracts relevant information such as headers, data, and stream IDs. The reconstructed payloads are then stored. Includes improved logging and error handling.
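One detail of reconstructing HTTP/2 from TCP is that payloads arrive in arbitrary segment sizes, and the client direction starts with the fixed connection preface that must be stripped before frame parsing. A minimal sketch (the class name is hypothetical):

```python
# The fixed client connection preface from RFC 7540 §3.5.
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

class DirectionBuffer:
    """Accumulates one direction of a TCP stream and strips the client preface."""

    def __init__(self):
        self.buf = b""
        self.preface_seen = False

    def feed(self, payload: bytes) -> bytes:
        """Append a TCP segment; return bytes ready for HTTP/2 frame parsing."""
        self.buf += payload
        if not self.preface_seen and self.buf.startswith(PREFACE):
            self.buf = self.buf[len(PREFACE):]
            self.preface_seen = True
        return self.buf
```

In practice one such buffer is kept per direction of each TCP 4-tuple, and a frame parser repeatedly consumes complete frames from the front of `buf`.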
Implements MongoDB integration to store reconstructed packet payloads for further analysis. This allows for persistent storage and retrieval of processed network traffic data, facilitating more in-depth investigations and reporting. A new MongoDB collection named `packet_analysis` stores payload data with a timestamp. The MONGO_URI environment variable configures the MongoDB connection.
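The summary names the `packet_analysis` collection and the `MONGO_URI` variable, but not the document shape; the sketch below shows one plausible record layout, with the actual insert left commented out since it needs a live MongoDB (`build_record` and its fields are assumptions):

```python
import os
from datetime import datetime, timezone

# Connection string comes from the environment, as described above.
MONGO_URI = os.environ.get("MONGO_URI", "mongodb://localhost:27017")

def build_record(payload: bytes, protocol: str) -> dict:
    """Assumed shape of a document destined for the packet_analysis collection."""
    return {
        "timestamp": datetime.now(timezone.utc),  # when the payload was processed
        "protocol": protocol,                     # e.g. "HTTP2", "SCTP"
        "payload": payload.hex(),                 # hex-encoded reconstructed bytes
    }

# Inserting would look roughly like this (not executed here):
# from pymongo import MongoClient
# collection = MongoClient(MONGO_URI)["integration-tests"]["packet_analysis"]
# collection.insert_one(build_record(b"\x00\x01", "HTTP2"))
```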
Actionable comments posted: 43
♻️ Duplicate comments (10)
tests/integration/cert/n3iwf.key (1)
1-27: Duplicate key issue. As mentioned in the review of `tngf.key`, this key is identical to other test keys.

tests/integration/cert/chf.key (1)
1-27: Same critical issue as noted for `root.key`: a plain private key is committed. Follow the prescribed remediation steps.

tests/integration/cert/smf.key (1)
1-27: Same critical issue as noted for `root.key`: a plain private key is committed. Follow the prescribed remediation steps.

tests/integration/cert/nef.key (1)
1-27: Same critical issue as noted for `root.key`: a plain private key is committed. Follow the prescribed remediation steps.

tests/integration/cert/ausf.key (1)
1-27: Same critical issue as noted for `root.key`: a plain private key is committed. Follow the prescribed remediation steps.

tests/integration/cert/amf.key (1)
1-27: Duplicate: private key checked in. Same critical concern as noted for `udm.key`; remove and rotate.

tests/integration/cert/nrf.key (1)
1-27: Duplicate: private key checked in. Same critical concern as noted for `udm.key`; remove and rotate.

tests/integration/cert/udr.key (1)
1-27: Duplicate: private key checked in. Same critical concern as noted for `udm.key`; remove and rotate.

tests/integration/config/omnipathcfg.yaml (1)
6-12: Same HTTP/TLS mismatch as SMF config. `scheme: http` plus TLS paths is contradictory. Align as in the previous comment to avoid a mis-configured AMF.

tests/integration/config/smfcfg.yaml (1)
8-15: HTTP/TLS inconsistency repeats here. Apply the same fix: move to `https` or drop the key/pem block.
🧹 Nitpick comments (47)
lightning-nf/omnipath/app/src/lib.rs (1)
129-131: Logging message could mislead & lacks context
`info!("Amf Started");` fires before configuration is validated or the NGAP listener is bound. If initialization fails later, the log is still emitted, giving a false sense of success and complicating troubleshooting. Prefer a more specific, staged log (or move this after all fallible steps):

```diff
- info!("Amf Started");
+ info!(nf_type=?SOURCE_TYPE, "AMF initialization begun");
```

Also ensure the global subscriber is set before this call (otherwise the message is silently dropped).

lightning-nf/omnipath/app/src/config/mod.rs (2)
240-262: Consider the implications of DNS resolution during deserialization. The DNS resolution happens at configuration load time, which means:
- Configuration loading could be delayed if DNS is slow or unavailable
- The resolved IP is cached for the lifetime of the configuration
- Only the first resolved IP is used, which might not be ideal for load balancing
Consider documenting these behaviors or implementing lazy resolution if needed.
254-261: Document the first-IP selection behavior. The code takes the first resolved IP address when multiple are available. This could impact reliability in multi-homed environments.
Consider adding a comment to clarify this behavior:
```diff
 // If direct parsing fails, try DNS lookup using std::net
 let addresses: Vec<std::net::SocketAddr> = (s, 80).to_socket_addrs()?.collect();
+// Take the first resolved address. In multi-homed environments, this might not be optimal.
 if let Some(socket_addr) = addresses.into_iter().next() {
```

tests/integration/sniffer/h2_decoder/utils.py (1)
3-6: Harden logging setup & prevent handler duplication. No log level or formatter is set, and the handler is re-added on every import, producing duplicate output. Guard the handler addition and configure sensible defaults.
```diff
-_handler = logging.StreamHandler()
-logger = logging.getLogger("h2_decoder")
-logger.propagate = False
-logger.addHandler(_handler)
+logger = logging.getLogger("h2_decoder")
+if not logger.handlers:  # avoid duplicates
+    _handler = logging.StreamHandler()
+    _handler.setLevel(logging.DEBUG)
+    _handler.setFormatter(
+        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
+    )
+    logger.addHandler(_handler)
+logger.setLevel(logging.DEBUG)
+logger.propagate = False
```

tests/integration/user-init/Dockerfile (1)
1-7: Harden the image & pin Python package versions. The current image builds as root, installs unpinned dependencies, and lacks an entrypoint.
Consider:

```diff
-FROM python:3.11-slim
+FROM python:3.11.5-slim

-RUN pip install --no-cache-dir requests pyyaml
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt \
+    && adduser --disabled-password --uid 1001 appuser \
+    && chown -R appuser /app
+USER appuser

-COPY create-user.py /app/create-user.py
+COPY create-user.py .
+
+ENTRYPOINT ["python", "/app/create-user.py"]
```

Benefits: deterministic builds, smaller diff for updates, runs as non-root.
tests/integration/README.md (1)
38-41: Format the RAN simulator section as a table for consistency. The current free-form lines break Markdown rendering.
```diff
-# Gnb Simulator
-Ran1 = Ip Address: 10.0.0.101
-Ran2 = Ip Address: 10.0.0.102
-Ran3 = Ip Address: 10.0.1.103
+# gNB Simulators
+| Name | IP Address |
+|------|------------|
+| Ran1 | 10.0.0.101 |
+| Ran2 | 10.0.0.102 |
+| Ran3 | 10.0.1.103 |
```

tests/integration/config/n3iwf-ipsec.sh (2)
1-1: Add strict-mode guards for safer execution. Enable `set -euo pipefail` right after the shebang so the script aborts on the first failure, catches unset vars, and propagates pipe errors. (Note that `pipefail` is not POSIX `sh`; either switch the shebang to `#!/bin/bash` or drop that option.) This avoids half-configured IPSec interfaces when a command fails.

```diff
-#!/bin/sh
+#!/bin/sh
+set -euo pipefail
```
13-13: Quote the command substitution to satisfy ShellCheck SC2046. Without quotes, `hostname -i` may return multiple addresses, causing word-splitting and an invalid `ip link` invocation.

```diff
-ip link add name ipsec0 type vti local $(hostname -i | awk '{print $1}') remote 0.0.0.0 key 5
+ip link add name ipsec0 type vti local "$(hostname -i | awk '{print $1}')" remote 0.0.0.0 key 5
```

tests/integration/config/uecfg-ulcl.yaml (1)
21-25: Hostname in `gnbSearchList` may not resolve inside the simulators. `gnbsim` performs a direct `inet_aton()` on each entry; host-name resolution is skipped. Unless `/etc/hosts` is pre-populated inside the container, `gnb.unifyair.com` will be ignored and only `127.0.0.1` will be tried. Consider replacing it with the actual IP or mounting an `/etc/hosts` snippet.

tests/integration/config/gnbcfg.yaml (1)
8-11: Mixing loopback (127.0.0.1) with FQDNs for different interfaces can confuse SCTP/GTP routing. Unless the FQDN resolves to the same loopback interface, the gNB will advertise multiple IPs that belong to different namespaces, leading to asymmetric traffic during integration tests. Align all three addresses to either the loopback or the FQDN to avoid subtle packet-capture headaches.

tests/integration/config/ULCL/upfcfg-psa-upf.yaml (1)
8-10: `retransTimeout` is parsed as a duration; quote it explicitly to keep YAML happy. Some YAML parsers treat bare `1s` as a string, others may mis-parse it. Safer:

```diff
- retransTimeout: 1s
+ retransTimeout: "1s"
```

tests/integration/config/uecfg.yaml (1)
40-63: Inconsistent number formats in NSSAI configuration. The slice identifiers use mixed formats: hex in `sessions` and `configured-nssai` (e.g., `0x01`, `0x010203`) but decimal in `default-nssai` (e.g., `1`). Consider using consistent formatting throughout for clarity. Apply this fix for consistency:

```diff
 # Default Configured NSSAI for this UE
 default-nssai:
-  - sst: 1
-    sd: 1
+  - sst: 0x01
+    sd: 0x000001
```

tests/integration/config/ausfcfg.yaml (1)
7-7: Consider using HTTPS for production security. The configuration uses HTTP for the SBI scheme, which may be acceptable for integration testing but should be HTTPS in production environments to ensure secure communication between network functions.
tests/integration/config/pcfcfg.yaml (1)
8-8: Consider HTTPS for production environments. Similar to the AUSF config, this uses HTTP, which may be acceptable for integration testing but should be HTTPS in production.
tests/integration/sniffer/Dockerfile (3)
3-3: Consider using a more stable Ubuntu LTS version. Ubuntu 25.10 is a very recent release that may have stability issues. Consider using Ubuntu 22.04 LTS or 24.04 LTS for better stability and long-term support in integration testing.
31-31: Optimize package installation and cleanup. The Dockerfile has two separate `apt-get update` calls, which is inefficient and misses cleanup after the second installation. Consider consolidating package installations for better caching and consistent cleanup.

```diff
-RUN apt-get update && apt-get install -y libc-ares-dev libspeexdsp-dev
+# Move these packages to the main installation block above
+RUN apt-get update && apt-get install -y \
+    libpcap-dev git curl iproute2 supervisor \
+    build-essential cmake \
+    libsystemd-dev \
+    libssh-dev \
+    libglib2.0-dev \
+    qtbase5-dev \
+    qttools5-dev \
+    qtmultimedia5-dev \
+    libqt5svg5-dev \
+    libgcrypt20-dev \
+    flex \
+    bison \
+    wget \
+    byacc \
+    libc-ares-dev \
+    libspeexdsp-dev \
+    python3 \
+    python3-pip \
+    python3.13-venv \
+    && rm -rf /var/lib/apt/lists/*
```

Also applies to: 40-40
28-39: Consider verifying Wireshark download integrity. While compiling from source provides version control, consider adding checksum verification for the downloaded tarball to ensure integrity and security.
Example addition:
```diff
 RUN wget https://www.wireshark.org/download/src/wireshark-4.4.7.tar.xz \
+    && echo "expected_sha256_hash  wireshark-4.4.7.tar.xz" | sha256sum -c \
     && tar -xf wireshark-4.4.7.tar.xz
```

tests/integration/config/chfcfg.yaml (1)
17-19: Copy-paste artefact in comment. The inline comment still says "Nchf_AMPolicyControl" while the list correctly contains `nchf-convergedcharging`. Fix the comment to avoid misleading operators skimming the file.

tests/integration/sniffer/h2_decoder/__init__.py (2)
1-14: Docstring references non-exported helper. `debug_frame_parsing` is mentioned but not imported nor included in `__all__`. Either remove the mention or expose the helper to avoid dead documentation.
23-27: Consider surfacing public helpers in `__all__`. If `debug_frame_parsing` (or other utilities) are meant for external use during tests, append them to `__all__`; otherwise keep them private and drop them from the docstring.

Example:

```diff
 __all__ = [
     'HTTP2Decoder',
     'HTTP2Connection',
     'HTTP2Stream',
+    'debug_frame_parsing',
 ]
```

tests/integration/config/n3iwfcfg.yaml (2)
46-50: Private key referenced – double-check repository exposure
`privateKey` points to `cert/n3iwf.key`. Ensure the actual key file is either:
- committed only for disposable, non-production use; or
- excluded from VCS and mounted at runtime (recommended).
Accidental leakage of long-term keys is a common compliance finding.
57-60: Missing trailing newline. YAML-lint flags the lack of a final newline. Add one to keep tooling quiet.

```diff
 reportCaller: false # enable the caller report or not, value: true or false
+
```

tests/integration/config/udmcfg.yaml (1)
34-36: Missing trailing newline. Same YAML-lint issue as above; add a newline at EOF for consistency.
tests/integration/config/n3uecfg.yaml (1)
32-38: Embedded subscriber secrets
`K`, `OP`, `OPC`, etc. are included inline. If these are test-only and rotate regularly, that's fine; otherwise consider supplying them via secrets management to avoid leaking production credentials.

tests/integration/config/tngfcfg.yaml (3)
6-6: Fix YAML formatting: excessive spacing after colon. The static analysis tool detected excessive spacing after the colon on line 6.

```diff
-TNGFInformation:
+TNGFInformation:
```
19-19: Remove trailing whitespace. Line 19 contains trailing spaces that should be removed for cleaner formatting.

```diff
-  TAISliceSupportList: # Network Slice supported in this TAI 
+  TAISliceSupportList: # Network Slice supported in this TAI
```
60-61: Remove excessive blank lines at end of file. The file has more blank lines than necessary at the end.

```diff
 reportCaller: false # enable the caller report or not, value: true or false
-
```

tests/integration/docker-compose.yaml (1)
352-361: Fix YAML formatting issues in network configuration. Static analysis detected formatting issues that should be addressed.

```diff
       - subnet: 10.0.0.0/16
-
```

tests/integration/config/gnbsimcfg.yaml (1)
45-45: Remove trailing whitespace. Line 45 contains trailing spaces that should be removed.

```diff
-        sd: 112233 
+        sd: 112233
```

tests/integration/sniffer/h2_decoder/stream.py (1)
3-3: Remove unused import. The `logger` import is not used in this file and should be removed to clean up the code.

```diff
-from .utils import logger
```

deploy/omnipath/Dockerfile (1)
21-27: Image ships as root with no entrypoint. For production-grade images, expose a non-root user and default command:
```diff
 FROM executor-base AS executor
@@
-RUN mkdir -p /unifyair/config
-COPY --from=builder /unifyair/target/release/lightning-cli /unifyair/lightning-cli
+RUN addgroup -S app && adduser -S app -G app \
+    && mkdir -p /unifyair/config
+COPY --from=builder /unifyair/target/release/lightning-cli /usr/local/bin/lightning-cli
+USER app
+ENTRYPOINT ["lightning-cli"]
```

deploy/gnbsim/Dockerfile (1)
31-38: Optional debug layer drags in 50 MB every build. Cache busts when `DEBUG_TOOLS` toggles. Move it into a separate stage or use a multi-arch `--target`. Consider:

```dockerfile
FROM gnbsim AS gnbsim-debug
ARG DEBUG_TOOLS=true
...
```

tests/integration/config/omnipathcfg.yaml (1)
58-59: Trailing whitespace trips linters; clean up. YAMLlint already reported this:

```
58  full: unifyair␠
59  short: ua␠
```

Remove the extra spaces.

tests/integration/config/smfcfg.yaml (1)
tests/integration/config/smfcfg.yaml (1)
45-78: UPF FQDN re-used for N3 & N4 may violate 3GPP separation. N3 (GTP-U) should resolve to the user-plane datapath IP, while N4 (PFCP) typically lives on a control-plane interface. Using the same host `upf.unifyair.com` for both can cause routing ambiguities in multi-NIC deployments. Check whether separate A/AAAA records or explicit IPs are required.
tests/integration/config/nssfcfg.yaml (1)
39-39: Consider adding explicit slice differentiator values. Several S-NSSAI entries are missing the `sd` (slice differentiator) field. While this might be intentional for default slices, consider explicitly defining `sd: 000000` for clarity and consistency with 3GPP specifications.

For example:

```diff
 - sst: 1 # Slice/Service Type (uinteger, range: 0~255)
+  sd: 000000 # Slice Differentiator (3 bytes hex string, range: 000000~FFFFFF)
```

Also applies to: 69-69, 123-123, 139-139, 152-152, 166-166, 176-176, 189-189, 201-201, 216-216, 228-228, 242-242, 255-255, 268-268, 288-288
tests/integration/sniffer/supervisord.conf (1)
28-29: Fix inconsistent log size configuration syntax. Some entries use `stdout_logfile_maxbytes = 0` (with spaces around `=`) while others use `stdout_maxbytes=0` (without spaces).

```diff
-stdout_logfile_maxbytes = 0
-stderr_logfile_maxbytes = 0
+stdout_logfile_maxbytes=0
+stderr_logfile_maxbytes=0
```

Also applies to: 43-44
tests/integration/config/userinitcfg.yaml (2)
173-173: Fix YAML formatting issues. The file is missing a newline at the end and has trailing spaces, which violates YAML formatting standards.

```diff
-      qosRef: 2 
+      qosRef: 2
+
```
56-62: Validate QoS profile configuration consistency. The 5G QoS profiles have some inconsistencies:

- `preemptCap` and `preemptVuln` are empty strings instead of valid enumeration values
- `priorityLevel` is duplicated (once in `arp` and once at the profile level)

```diff
 arp:
   priorityLevel: 8
-  preemptCap: ''
-  preemptVuln: ''
-  priorityLevel: 8
+  preemptCap: NOT_PREEMPT
+  preemptVuln: NOT_PREEMPTABLE
```

Also applies to: 82-88
docker-bake.hcl (1)
9-9: Consider a consistent tag versioning strategy. The tag naming patterns are inconsistent:

- `unifyair/builder-base:latest` and `unifyair/executor-base:latest` use `:latest`
- `unifyair/omnipath` has no version tag
- `omecproject/5gc-gnbsim:rel-1.6.3` uses a specific version

Consider using a consistent versioning approach:

```diff
-  tags = ["unifyair/omnipath"]
+  tags = ["unifyair/omnipath:latest"]
```

Or parameterize versions using variables for better maintainability.
Also applies to: 15-15, 24-24, 34-34
tests/integration/user-init/create-user.py (2)
58-58: Improve token extraction robustness. The token extraction logic assumes specific response field names but may not handle all API response variations.

```diff
-    token = data.get('token') or data.get('access_token')
+    # Try multiple possible token field names
+    token = data.get('token') or data.get('access_token') or data.get('jwt') or data.get('authToken')
```

Consider making the token field name configurable in the YAML config.
123-149: Consider refactoring the main function for better testability. The main function handles multiple responsibilities (argument parsing, config loading, authentication, user creation). Consider separating these concerns for better testability and maintainability.

```python
def main():
    args = parse_arguments()
    config = load_config(args.config)
    validate_config(config)
    token = authenticate(config)
    create_all_subscribers(token, config)

def parse_arguments():
    ...  # Move argparse logic here

def validate_config(config):
    ...  # Add comprehensive config validation

def authenticate(config):
    ...  # Move authentication logic here

def create_all_subscribers(token, config):
    ...  # Move subscriber creation logic here
```

tests/integration/sniffer/h2_decoder/decoder.py (3)
2-2: Remove unused imports. The static analysis correctly identifies unused imports that should be removed.

```diff
-from collections import defaultdict, OrderedDict
+from collections import defaultdict
```

Note: If `defaultdict` is also not used in the code, it should be removed as well.
316-316: Remove unused variable assignment. The variable `server_complete_before_current_frame` is assigned but never used.

```diff
-# Store the server_complete state *before* processing the current frame
-server_complete_before_current_frame = stream.server_complete
```
203-213: Consider more specific exception handling. Using a broad `except Exception` could hide programming errors. Consider catching specific exceptions that are expected during frame parsing.

```diff
-except Exception as e:
+except (struct.error, IndexError, ValueError) as e:
```

This would catch the most likely parsing errors while allowing unexpected exceptions to propagate for debugging.
tests/integration/sniffer/capturer.py (3)
89-101: Simplify exception handling with `contextlib.suppress`. Use `contextlib.suppress` for cleaner code as suggested by static analysis.

```diff
+from contextlib import suppress
 from pymongo import MongoClient
 from pymongo.errors import CollectionInvalid

 global mongo_collection
 mongo_client = MongoClient(MONGO_URI)
 mongo_db = mongo_client["integration-tests"]

 # Ensure collection exists
-try:
-    mongo_db.create_collection("packet_analysis")
-except CollectionInvalid:
-    pass  # Collection already exists
+with suppress(CollectionInvalid):
+    mongo_db.create_collection("packet_analysis")

 mongo_collection = mongo_db["packet_analysis"]
```
178-210: Remove unused exception variable. The exception variable `e` is not used in the exception handler.

```diff
-        except Exception as e:
+        except Exception:
             log_lines.append(
                 f"  Payload (hex): {payload.hex()[:200]}{'...' if len(payload) > 100 else ''}"
             )
```
557-567: Remove unnecessary f-string prefix. The f-string on line 559 has no placeholders.

```diff
 def log_l4_packet(proto, src_ip, dst_ip, src_port, dst_port, payload):
     log_lines = [
-        f"----- L4+ PACKET -----",
+        "----- L4+ PACKET -----",
         f"  Protocol: {proto}",
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (15)
`tests/integration/cert/af.pem` is excluded by `!**/*.pem`
`tests/integration/cert/af_cafa1ddd-6b4f-4604-94a7-89780b41cc03.pem` is excluded by `!**/*.pem`
`tests/integration/cert/amf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/ausf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/chf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/n3iwf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/nef.pem` is excluded by `!**/*.pem`
`tests/integration/cert/nrf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/nssf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/pcf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/root.pem` is excluded by `!**/*.pem`
`tests/integration/cert/smf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/tngf.pem` is excluded by `!**/*.pem`
`tests/integration/cert/udm.pem` is excluded by `!**/*.pem`
`tests/integration/cert/udr.pem` is excluded by `!**/*.pem`
📒 Files selected for processing (72)
`.dockerignore` (1 hunks)
`.gitignore` (3 hunks)
`Cargo.toml` (1 hunks)
`config/amfcfg.yaml` (1 hunks)
`deploy/base/Dockerfile.builder` (1 hunks)
`deploy/base/Dockerfile.executor` (1 hunks)
`deploy/gnbsim/Dockerfile` (1 hunks)
`deploy/omnipath/Dockerfile` (1 hunks)
`docker-bake.hcl` (1 hunks)
`lightning-cli/src/nf_type.rs` (5 hunks)
`lightning-nf/omnipath/app/src/builder/sbi/nrf.rs` (1 hunks)
`lightning-nf/omnipath/app/src/config/mod.rs` (6 hunks)
`lightning-nf/omnipath/app/src/context/app_context.rs` (2 hunks)
`lightning-nf/omnipath/app/src/lib.rs` (1 hunks)
`lightning-nf/omnipath/app/src/ngap/engine/controller.rs` (3 hunks)
`tests/integration/Cargo.toml` (1 hunks)
`tests/integration/README.md` (1 hunks)
`tests/integration/cert/amf.key` (1 hunks)
`tests/integration/cert/ausf.key` (1 hunks)
`tests/integration/cert/chf.key` (1 hunks)
`tests/integration/cert/n3iwf.key` (1 hunks)
`tests/integration/cert/nef.key` (1 hunks)
`tests/integration/cert/nrf.key` (1 hunks)
`tests/integration/cert/nssf.key` (1 hunks)
`tests/integration/cert/pcf.key` (1 hunks)
`tests/integration/cert/root.key` (1 hunks)
`tests/integration/cert/smf.key` (1 hunks)
`tests/integration/cert/tngf.key` (1 hunks)
`tests/integration/cert/udm.key` (1 hunks)
`tests/integration/cert/udr.key` (1 hunks)
`tests/integration/config/ULCL/smfcfg.yaml` (1 hunks)
`tests/integration/config/ULCL/uerouting.yaml` (1 hunks)
`tests/integration/config/ULCL/upfcfg-i-upf.yaml` (1 hunks)
`tests/integration/config/ULCL/upfcfg-psa-upf.yaml` (1 hunks)
`tests/integration/config/amfcfg.yaml` (1 hunks)
`tests/integration/config/ausfcfg.yaml` (1 hunks)
`tests/integration/config/chfcfg.yaml` (1 hunks)
`tests/integration/config/gnbcfg.yaml` (1 hunks)
`tests/integration/config/gnbsimcfg.yaml` (1 hunks)
`tests/integration/config/n3iwf-ipsec.sh` (1 hunks)
`tests/integration/config/n3iwfcfg.yaml` (1 hunks)
`tests/integration/config/n3uecfg.yaml` (1 hunks)
`tests/integration/config/nefcfg.yaml` (1 hunks)
`tests/integration/config/nrfcfg.yaml` (1 hunks)
`tests/integration/config/nssfcfg.yaml` (1 hunks)
`tests/integration/config/omnipathcfg.yaml` (1 hunks)
`tests/integration/config/pcfcfg.yaml` (1 hunks)
`tests/integration/config/smfcfg.yaml` (1 hunks)
`tests/integration/config/tngfcfg.yaml` (1 hunks)
`tests/integration/config/udmcfg.yaml` (1 hunks)
`tests/integration/config/udrcfg.yaml` (1 hunks)
`tests/integration/config/uecfg-ulcl.yaml` (1 hunks)
`tests/integration/config/uecfg.yaml` (1 hunks)
`tests/integration/config/uerouting.yaml` (1 hunks)
`tests/integration/config/upf-iptables.sh` (1 hunks)
`tests/integration/config/upfcfg.yaml` (1 hunks)
`tests/integration/config/userinitcfg.yaml` (1 hunks)
`tests/integration/config/webuicfg.yaml` (1 hunks)
`tests/integration/docker-compose.yaml` (1 hunks)
`tests/integration/sniffer/Dockerfile` (1 hunks)
`tests/integration/sniffer/capturer.py` (1 hunks)
`tests/integration/sniffer/h2_decoder/__init__.py` (1 hunks)
`tests/integration/sniffer/h2_decoder/connection.py` (1 hunks)
`tests/integration/sniffer/h2_decoder/decoder.py` (1 hunks)
`tests/integration/sniffer/h2_decoder/stream.py` (1 hunks)
`tests/integration/sniffer/h2_decoder/utils.py` (1 hunks)
`tests/integration/sniffer/supervisord.conf` (1 hunks)
`tests/integration/src/main.rs` (1 hunks)
`tests/integration/src/network_capture.rs` (1 hunks)
`tests/integration/src/tester.rs` (1 hunks)
`tests/integration/user-init/Dockerfile` (1 hunks)
`tests/integration/user-init/create-user.py` (1 hunks)
🧬 Code Graph Analysis (2)
lightning-nf/omnipath/app/src/ngap/engine/controller.rs (1)
lightning-nf/omnipath/app/src/ngap/engine/utils.rs (1)
`decode_ngap_pdu` (38-46)
tests/integration/sniffer/capturer.py (2)
tests/integration/sniffer/h2_decoder/decoder.py (2)
`HTTP2Decoder` (13-380)
`process_tcp_packet` (69-121)

utils/client/src/nf_clients/mod.rs (1)
request(121-139)
🪛 Gitleaks (8.27.2)
tests/integration/cert/nssf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/smf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/n3iwf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/tngf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/root.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/amf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/ausf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/pcf.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/cert/udm.key
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
tests/integration/config/uecfg.yaml
9-9: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
tests/integration/config/udmcfg.yaml
26-26: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
29-29: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
tests/integration/config/gnbsimcfg.yaml
60-60: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
tests/integration/config/userinitcfg.yaml
14-14: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🪛 RuboCop (1.76.1)
deploy/base/Dockerfile.builder
[fatal] 6-6: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
[fatal] 6-6: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
[fatal] 7-7: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
[fatal] 8-8: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
[fatal] 9-9: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
[fatal] 10-10: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
[fatal] 16-16: unexpected token tIDENTIFIER
(Using Ruby 2.7 parser; configure using TargetRubyVersion parameter, under AllCops)
(Lint/Syntax)
🪛 YAMLlint (1.37.1)
tests/integration/config/omnipathcfg.yaml
[error] 58-58: trailing spaces
(trailing-spaces)
[error] 59-59: trailing spaces
(trailing-spaces)
tests/integration/docker-compose.yaml
[warning] 6-6: too many spaces after colon
(colons)
[error] 19-19: trailing spaces
(trailing-spaces)
[warning] 361-361: too many blank lines (1 > 0)
(empty-lines)
tests/integration/config/n3iwfcfg.yaml
[error] 60-60: no new line character at the end of file
(new-line-at-end-of-file)
tests/integration/config/gnbsimcfg.yaml
[error] 45-45: trailing spaces
(trailing-spaces)
tests/integration/config/userinitcfg.yaml
[error] 173-173: no new line character at the end of file
(new-line-at-end-of-file)
[error] 173-173: trailing spaces
(trailing-spaces)
🪛 Shellcheck (0.10.0)
tests/integration/config/n3iwf-ipsec.sh
[warning] 13-13: Quote this to prevent word splitting.
(SC2046)
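For context, SC2046 fires because an unquoted `$( )` undergoes word splitting. A minimal sketch (the path and helper function are hypothetical, not the actual n3iwf-ipsec.sh code):

```shell
# Hypothetical illustration of SC2046: an unquoted $(...) is word-split.
dir="/tmp/sc2046 demo"            # a path containing a space
mkdir -p "$dir"
get_dir() { printf '%s' "$dir"; }

# Unquoted: $(get_dir) splits into two words, so ls sees two bogus paths.
ls -d $(get_dir) 2>/dev/null || echo "word splitting broke the path"

# Quoted: the substitution stays a single argument.
ls -d "$(get_dir)"
```

The fix suggested by the warning is simply to quote the substitution on line 13 of the script.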
🪛 Ruff (0.12.2)
tests/integration/sniffer/h2_decoder/stream.py
3-3: .utils.logger imported but unused
Remove unused import: .utils.logger
(F401)
tests/integration/sniffer/h2_decoder/decoder.py
2-2: collections.defaultdict imported but unused
Remove unused import
(F401)
2-2: collections.OrderedDict imported but unused
Remove unused import
(F401)
316-316: Local variable server_complete_before_current_frame is assigned to but never used
Remove assignment to unused variable server_complete_before_current_frame
(F841)
tests/integration/sniffer/capturer.py
10-10: scapy.layers.http.HTTP imported but unused
Remove unused import: scapy.layers.http.HTTP
(F401)
14-14: scapy.utils.wrpcap imported but unused
Remove unused import: scapy.utils.wrpcap
(F401)
15-15: from scapy.contrib.http2 import * used; unable to detect undefined names
(F403)
97-100: Use contextlib.suppress(CollectionInvalid) instead of try-except-pass
(SIM105)
203-203: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
330-330: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
559-559: f-string without any placeholders
Remove extraneous f prefix
(F541)
643-643: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
🧰 Additional context used
🧬 Code Graph Analysis (2)
lightning-nf/omnipath/app/src/ngap/engine/controller.rs (1)
lightning-nf/omnipath/app/src/ngap/engine/utils.rs (1)
decode_ngap_pdu(38-46)
tests/integration/sniffer/capturer.py (2)
tests/integration/sniffer/h2_decoder/decoder.py (2)
HTTP2Decoder (13-380)
process_tcp_packet (69-121)
utils/client/src/nf_clients/mod.rs (1)
request(121-139)
🔇 Additional comments (46)
Cargo.toml (2)
2-2: Path wildcard removal – verify all Omnipath crates still build
Narrowing `"lightning-nf/omnipath/*"` to `"lightning-nf/omnipath/app"` restricts workspace membership. Any sibling crates (e.g., common libraries, proc-macros) are now excluded and may break dependants using relative paths. Double-check with `cargo check --all-targets`.
7-8: Edition 2024 requires a recent toolchain – confirm CI
`edition = "2024"` was stabilized in Rust 1.85; older toolchains reject it (it was nightly-only before that). Ensure your CI/dev images (and the new builder image) use Rust 1.85 or newer, otherwise builds will fail.
.gitignore (1)
74-76: Git already contains certs despite new ignore rule
`.gitignore` now skips `tests/integration/cert/*`, but the repository already tracks several key/cert files (see `nssf.key`). Git keeps tracking files that were added before the rule, so the ignore has no effect until you untrack them (`git rm --cached`) and, for sensitive keys, scrub them from history.
config/amfcfg.yaml (1)
48-48: LGTM!
The NRF URI update to the local test environment IP is appropriate for the integration testing setup.
tests/integration/Cargo.toml (1)
1-20: Verify workspace linkage and update dependency versions
`version.workspace = true` and the other workspace keys require this directory to be inside the root workspace; if it is not, `cargo build` will fail. Confirm that the root `Cargo.toml` lists it under `[workspace]`.
`tokio = "1.0"` uses Cargo's default caret semantics, so it already resolves to the newest compatible 1.x release; writing `"1"` expresses the same constraint more plainly.
`pcap = "2.2.0"` – double-check the crate name/version you intend against crates.io; the upstream `pcap` crate has been re-licensed and gained safety improvements across releases.
tests/integration/config/n3iwf-ipsec.sh (1)
5-15: Commented IP (10.0.0.1) differs from configured IP (10.100.200.15/24)
The address in the explanatory comment (`IPSecInterfaceAddress: 10.0.0.1`) does not match the actual `ip addr add` command (10.100.200.15/24). Confirm which value is correct and keep comments & code in sync.
tests/integration/config/webuicfg.yaml (1)
9-11: HTTP used while a TLS certificate path is configured
`nrfUri` is `http://…`, yet `nrfCertPem` points to a certificate. If NRF is expected to run over HTTPS, switch the scheme to `https://` (and update the port) or drop the unused certificate path.
tests/integration/config/ULCL/upfcfg-i-upf.yaml (1)
23-27: Duplicate DNN entries may cause ambiguous routing
Two `internet` entries with different CIDRs are listed. Free5GC treats DNN as a key; duplicates can override or confuse SMF->UPF selection. Confirm this is intentional.
tests/integration/config/upfcfg.yaml (1)
22-29: Duplicate `dnn: internet` blocks
Same concern as the I-UPF config: two identical DNN names with different pools can be problematic unless ULCL rules disambiguate. Please verify.
.dockerignore (1)
4-5: Ignoring `deploy/` and the root-level `Dockerfile` can break `docker-bake.hcl` builds
`docker-bake.hcl` and CI recipes typically rely on those files. With them excluded from the build context, targets may fail to resolve their Dockerfiles. If they need to remain in context, drop these two lines:
-deploy/
-Dockerfile
tests/integration/config/nefcfg.yaml (1)
8-9: `registerIPv4`/`bindingIPv4` contain FQDNs, not IPv4 literals.
These fields are parsed by free5gc exactly as the name suggests – an IPv4 address string (e.g. 192.168.1.10). Supplying a hostname will be rejected at runtime when the NF tries to marshal the NF profile for NRF registration. Resolve the FQDNs to concrete IPs (or vendor-patch free5gc to accept hostnames).
tests/integration/config/udrcfg.yaml (1)
8-9: `registerIPv4`/`bindingIPv4` are hostnames.
UDR will fail NF registration for the same reason outlined earlier – supply raw IPv4 addresses.
tests/integration/config/nrfcfg.yaml (3)
20-22: PLMN configuration is appropriate for testing.
The MCC/MNC values (208/93) are suitable for integration testing environments.
23-25: Service configuration follows 3GPP standards.
The NRF service list correctly includes the standard Nnrf_NFManagement and Nnrf_NFDiscovery services as per TS 29.510.
27-30: Logging configuration is appropriate for integration testing.
The log level and settings are well-suited for the testing environment.
tests/integration/config/uecfg.yaml (4)
8-15: Static analysis false positive: These are 5G authentication keys, not API keys.
The static analysis tool flagged the `key` field as a potential API key, but this is a false positive. This is a 5G subscriber authentication key (K key) and operator code (OP/OPC) used for cellular network authentication. For integration testing, ensure these are well-known test keys and never use production subscriber keys in test configurations.
16-19: Device identifiers are properly formatted.
IMEI and IMEISV values follow the correct format and provide appropriate fallback identification for the test UE.
65-75: Security algorithms configuration is comprehensive.
All standard 5G integrity and ciphering algorithms are enabled, providing good test coverage for different security scenarios.
77-80: Integrity protection configuration enables comprehensive security testing.
Setting both uplink and downlink to "full" provides maximum security coverage for integration testing scenarios.
tests/integration/config/uerouting.yaml (2)
39-41: Route profile configuration is straightforward and appropriate.
The mapping between RouteProfileID "MEC1" and ForwardingPolicyID 10 is clear and suitable for MEC testing scenarios.
43-48: PFD configuration correctly defines edge application traffic flows.
The packet flow description appropriately defines traffic rules for the edge application, using standard IP filter syntax for outbound traffic from the edge server.
tests/integration/config/ULCL/uerouting.yaml (2)
5-18: ULCL routing configuration is appropriately simplified for focused testing.
This configuration provides a streamlined setup with a single UE group and simplified topology, suitable for ULCL-specific test scenarios. The structure remains consistent with the main routing configuration.
20-25: PFD configuration aligns well with ULCL test scenarios.
The application flow description correctly defines traffic from the specific destination IP (1.0.0.1/32) to the broader network range (10.60.0.0/16), which is appropriate for ULCL functionality testing.
lightning-nf/omnipath/app/src/context/app_context.rs (2)
73-73: Method name change correctly reflects generalized IP support.
The change from `get_ipv4_uri()` to `get_ip_uri()` appropriately reflects the expansion to support both IPv4 and IPv6 addresses.
81-92: IP endpoint construction logic is sound, but please verify the `IpEndPoint` definition
The pattern matching correctly handles both IPv4 and IPv6, and TCP is appropriate for SBI services. However, I couldn't locate the `IpEndPoint` struct definition in the local codebase. Please confirm:
- The `IpEndPoint` struct (wherever it's defined or imported) includes an `ipv6_address: Option<…>` field.
- Its `Default` implementation still provides valid defaults for all other fields.
File & location to check: lightning-nf/omnipath/app/src/context/app_context.rs around lines 81–92.
lightning-cli/src/nf_type.rs (3)
30-30: LGTM - Good practice to initialize tracing early.
Moving tracing initialization to the start of the function ensures logging is available throughout the application lifecycle, which improves debugging and observability.
87-90: LGTM - Improved error message formatting.
The multi-line formatting makes the error message more readable in the source code while maintaining the same runtime behavior.
131-131: LGTM - Enhanced application lifecycle logging.
The addition of informational logs for initialization start and execution completion improves observability and helps with monitoring application state transitions.
Also applies to: 143-143
tests/integration/config/ausfcfg.yaml (2)
18-22: LGTM - Valid PLMN configuration for testing.
The PLMN support list contains appropriate test values with valid MCC/MNC format for integration testing scenarios.
12-13: Certificate files verified in integration tests
The referenced certificate files (ausf.pem, ausf.key, nrf.pem, nrf.key) are present under tests/integration/cert/, so the relative paths in ausfcfg.yaml are correct. No changes needed.
tests/integration/config/pcfcfg.yaml (2)
28-30: LGTM - MongoDB configuration appropriate for testing.
The MongoDB configuration uses a simple connection string suitable for integration testing. Ensure authentication and encryption are properly configured for production environments.
22-22: Feature flags follow 3GPP specification format.
The suppFeat values (3fff, 3) are hex-encoded feature bitmasks as specified in TS 29.571 5.2.2, which is the correct format for 3GPP service feature negotiation.
Also applies to: 25-25
lightning-nf/omnipath/app/src/builder/sbi/nrf.rs (1)
36-43: LGTM - Excellent generalization to support both IPv4 and IPv6.
The pattern matching approach properly handles both IPv4 and IPv6 addresses, improving the flexibility of the NF profile builder while maintaining type safety. This aligns well with the broader IP address generalization across the codebase.
tests/integration/config/chfcfg.yaml (1)
8-15: HTTP scheme + TLS flags look contradictory
`sbi.scheme` is set to `http`, yet `tls.pem`/`key` are populated. If the CHF is expected to speak plain HTTP you can drop the TLS block; if HTTPS is required, switch `scheme` to `https` and ensure the port reflects that. Mixing the two will confuse service discovery and may break mutual-TLS with NRF.
tests/integration/config/udmcfg.yaml (1)
24-31: Hard-coded cryptographic material detected
Gitleaks flags the private/public key pairs in the `SuciProfile` section. They are standard test vectors (TS 33.501 Annex C.4), so exposure is probably harmless, but please confirm no real subscriber keys are included before merging to a public repo.
tests/integration/config/tngfcfg.yaml (1)
1-59: LGTM! Comprehensive TNGF configuration.
The TNGF configuration is well-structured with appropriate network settings, security configurations, and logging options. The IP addressing scheme is consistent with the integration test environment.
tests/integration/sniffer/h2_decoder/connection.py (2)
8-26: LGTM! Well-structured HTTP/2 connection state management.
The constructor properly initializes all necessary components for HTTP/2 connection handling, including separate HPACK decoders for each direction and appropriate default settings.
28-43: Excellent error handling and UTF-8 conversion.
The `decode_headers` method demonstrates good practices:
- Proper selection of decoder based on direction
- Safe UTF-8 decoding with error replacement
- Comprehensive exception handling with logging
- Graceful fallback returning empty list on errors
tests/integration/docker-compose.yaml (2)
14-19: Security consideration: Privileged mode usage.
The capturer service uses `privileged: true`, which grants extensive system access. While this is likely necessary for packet capture operations, ensure this is the minimal privilege required. Consider whether specific capabilities would be sufficient instead of full privileged mode:
cap_add:
  - NET_RAW
  - NET_ADMIN
  - SYS_ADMIN
Are these specific capabilities sufficient for your packet capture requirements, or is full privileged mode truly necessary?
1-351: LGTM! Comprehensive 5G network simulation orchestration.
The Docker Compose configuration effectively orchestrates a complete 5G core network simulation with:
- Proper service dependencies and health checks
- Consistent IP addressing scheme across services
- Appropriate volume mounts for configuration and certificates
- Well-structured network isolation
tests/integration/config/gnbsimcfg.yaml (2)
1-72: LGTM! Well-structured GNBSIM configuration.
The configuration provides comprehensive settings for gNodeB simulation including:
- Proper PLMN and tracking area configuration
- Consistent network addressing with the Docker environment
- Appropriate subscriber and slice configuration for testing
59-61: Standard 3GPP test keys only used in integration tests
All occurrences of these credentials are confined to tests/integration/config/* and match the well-known 3GPP TS 33.501 test vectors—not production secrets:
• tests/integration/config/userinitcfg.yaml
• tests/integration/config/uecfg-ulcl.yaml
• tests/integration/config/uecfg.yaml
• tests/integration/config/gnbsimcfg.yaml
No other references to these values exist in the codebase. You can safely leave them as-is for your GNBSIM integration tests.
tests/integration/sniffer/h2_decoder/stream.py (1)
9-69: LGTM! Excellent HTTP/2 stream implementation.
The `HTTP2Stream` class demonstrates good design principles:
- Clear separation of client and server state
- Proper handling of headers vs trailers distinction
- Useful helper methods for extracting HTTP pseudo-headers
- Clean state management with completion tracking
The implementation correctly follows HTTP/2 stream semantics and provides a solid foundation for the packet decoder.
lightning-nf/omnipath/app/src/ngap/engine/controller.rs (2)
108-113: Good error handling improvement
The change from fire-and-forget to explicit result handling improves observability and allows proper cleanup on normal termination.
254-300: Well-structured refactoring of the NGAP processing loop
The refactoring improves:
- Error handling with explicit error mapping
- Control flow clarity with explicit loop and pattern matching
- Observability with added logging of received PDUs
- Graceful termination by returning `Ok(())` when the socket closes
tests/integration/src/tester.rs (1)
231-248: LGTM!
The packet flow printing function is well-implemented with clear formatting and helpful visual indicators.
RUN apt-get update && apt-get install -y \
    pkg-config \
    libssl-dev \
    libclang-dev \
    clang \
    libsctp-dev

RUN cargo install cargo-chef

# Clean apt cache
RUN apt-get clean
🛠️ Refactor suggestion
Hardening & layer optimisation for builder image
- `apt-get install` without `--no-install-recommends` pulls unnecessary packages.
- `apt-get clean` alone leaves `/var/lib/apt/lists` behind, bloating the image.
- `cargo install cargo-chef` is unpinned; builds may break on future releases.
Suggested refinement:
-RUN apt-get update && apt-get install -y \
- pkg-config \
- libssl-dev \
- libclang-dev \
- clang \
- libsctp-dev
-
-RUN cargo install cargo-chef
-
-# Clean apt cache
-RUN apt-get clean
+RUN set -eux; \
+ apt-get update; \
+ apt-get install -y --no-install-recommends \
+ pkg-config \
+ libssl-dev \
+ libclang-dev \
+ clang \
+ libsctp-dev; \
+ rm -rf /var/lib/apt/lists/*
+
+# Pin cargo-chef for reproducibility
+RUN cargo install cargo-chef --locked --version 0.1.64
This reduces size, improves reproducibility, and passes typical container-lint checks.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-RUN apt-get update && apt-get install -y \
-    pkg-config \
-    libssl-dev \
-    libclang-dev \
-    clang \
-    libsctp-dev
-RUN cargo install cargo-chef
-# Clean apt cache
-RUN apt-get clean
+RUN set -eux; \
+    apt-get update; \
+    apt-get install -y --no-install-recommends \
+        pkg-config \
+        libssl-dev \
+        libclang-dev \
+        clang \
+        libsctp-dev; \
+    rm -rf /var/lib/apt/lists/*
+
+# Pin cargo-chef for reproducibility
+RUN cargo install cargo-chef --locked --version 0.1.64
🤖 Prompt for AI Agents
In deploy/base/Dockerfile.builder around lines 6 to 16, the apt-get install
command should include the --no-install-recommends flag to avoid installing
unnecessary packages, reducing image size. After installing packages, remove the
/var/lib/apt/lists directory to clean up apt cache fully and prevent image
bloat. Additionally, pin the version of cargo-chef during installation to ensure
build reproducibility and prevent future breakage. Update the Dockerfile
accordingly to implement these improvements.
RUN apt-get update && apt-get install -y \
    libssl3 \
    ca-certificates \
    libsctp1

# Clean apt cache
RUN apt-get clean
Combine RUN commands to optimize Docker layers
The apt-get update, install, and clean commands should be combined into a single RUN instruction to minimize layers and ensure the apt cache is cleaned in the same layer where it's created.
-# Install runtime dependencies
-RUN apt-get update && apt-get install -y \
- libssl3 \
- ca-certificates \
- libsctp1
-
-# Clean apt cache
-RUN apt-get clean
+# Install runtime dependencies
+RUN apt-get update && apt-get install -y \
+ libssl3 \
+ ca-certificates \
+ libsctp1 && \
+ apt-get clean && \
+    rm -rf /var/lib/apt/lists/*
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-RUN apt-get update && apt-get install -y \
-    libssl3 \
-    ca-certificates \
-    libsctp1
-# Clean apt cache
-RUN apt-get clean
+# Install runtime dependencies
+RUN apt-get update && apt-get install -y \
+    libssl3 \
+    ca-certificates \
+    libsctp1 && \
+    apt-get clean && \
+    rm -rf /var/lib/apt/lists/*
🤖 Prompt for AI Agents
In deploy/base/Dockerfile.executor around lines 6 to 12, the apt-get update,
install, and clean commands are split into separate RUN instructions, creating
unnecessary Docker layers and leaving the apt cache in intermediate layers.
Combine these commands into a single RUN instruction by chaining apt-get update,
apt-get install with the packages, and apt-get clean together using &&, so the
cache is cleaned in the same layer it was created, optimizing the image size and
layer count.
FROM golang:1.24.4-bookworm AS builder

RUN apt-get update && \
    apt-get -y install --no-install-recommends \
        vim \
        ethtool \
        git && \
    apt-get clean

WORKDIR $GOPATH/src/gnbsim

ARG VERSION

# Clone the repository instead of copying context files
RUN git clone https://github.com/omec-project/gnbsim.git .
RUN git checkout v$VERSION
RUN make all
ARG VERSION defaults to nothing – the build breaks silently.
If the caller forgets to pass --build-arg VERSION=1.6.3, `git checkout v` (an empty tag) fails late in the build.
Provide a sane default and pin the clone depth:
-ARG VERSION
+ARG VERSION=1.6.3
@@
-RUN git clone https://github.com/omec-project/gnbsim.git .
-RUN git checkout v$VERSION
+RUN git clone --depth 1 --branch v$VERSION \
+  https://github.com/omec-project/gnbsim.git .
🤖 Prompt for AI Agents
In deploy/gnbsim/Dockerfile around lines 7 to 23, the ARG VERSION has no default
value, causing the git checkout command to fail silently if VERSION is not
passed during build. Fix this by assigning a sensible default value to VERSION
in the ARG declaration and modify the git clone command to use a shallow clone
with a pinned depth to improve build efficiency and reliability.
tests/integration/cert/nssf.key
Outdated
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAwsrvEP7x76q1/dMEovHMHDowdcpHUfffSsLHiHE/fNpu+8FJ
3l0M4sBtBkoZ4ymlJOtUGGAbRKSy9CB01WfEd/zSeoxbk+oxZVLlopXJMaowTmiw
fmqrhHTf4+gOzWLl3Qws19FiQrEpOPPUIRLejXjVt1iSjpUBqklfk6qme2ajv7SW
AWZ2J5JcOAvnPHCdiW0LhR02veU3c/Hjcvz1tvbUYPHdUdfT4Z78qLHNG7R033E5
9GoVcmQTGvBOGkjTO+7tuzoB6uV/iNsIZ62xh2NUKpoi/2NHO5uzN9XvNnGBPWPM
6vEuIG7RTgLZXkjr1ntRDPjTxR9uxbOTtoKlhQIDAQABAoIBABKPzeobsSg6Zy8x
vX5ddiq2bMm8pcEs0o1INEVPLVKTagH5cDWNPl4ZIgp4YzEfVcpH1nKgLdkDrL3Q
nfF7+t/IKrJcz5kvttdF5/XvOgXOht3h4feZ6JAtpuwLv+VYaDkBIQDLxgY+5YO0
0GD6cFSg0BSxjv2hO1Bjv4TNvSGDAr+b1qIPJc+hJfxTYksPqijATOAov1oWfntZ
o5bwu2sZYYBaMN3mWNT5RI+0LPmEvBpS4zasFjmGzIuhGiW9+Qy7n6zE1rjyOjt7
aXqQmi1MpchLEEZ8YKZ5yCqse+LPnOVo41K02n9cRbpyx2PlOEzaZAhf4OhHAux0
7PMvk8ECgYEA52jiR4GnrP5k57WL5PWW/d71uCnkB7bsAA5plhA/L8M3cBtH1D10
XzxhJsgD4gzvOBbHdzlQlv1qaSEonXiBfDRYI6vE2jdF12/P1FnRn6itAsPyv8+B
eTugbtK1QyOUUFbWidluKYZyc8NBshdQs2MAGrxEIPyp9RsEWoElmXECgYEA133y
EO6ZSF7hOWtyajdbRl3U7W3ueVAcZ+XAwHg/1CR/DKP10nk41YoPKen+2eTtwPuz
FcJiTSiQTz24NC8oa1/HJOJisCzsY0hnr0eIfv8vh8KGZ+twNURMfRHDOgUHbPRA
k07Ua9qElLBS+K6IjFHBw/R5x7lV/Aut+h2wY1UCgYEA2yAcNS33Av8ZubfVsodD
5T39W9h/DzcNnrgHD4xu6OFoW4XZrIZM7emU7U41LxakZ25ajvD8+R0aaFZ3yTVV
F7HRTGXYz/HK9sfv+traft8QhS/ba/ooVg46SvbHob3NMY4pNPtpL3UaZ7gbO3KM
fnFepfJkaQAYR/q/Upw8aGECgYAU8R9yN83bs6TNsLx4yBJtvzavMEzJq1hnXHOk
AMurMTTLpK+CGdfFINqiZyQmRtI4O/3GcpKdDwEfQk9BewfBj5hJ5FpU5ri+WU4C
QcBpHyGeXvAy7zWDZ7TLPsurOsvi8cXIyxNnu9/Al3YoxsVsZZGgGoUvPLq3eOca
dIIs1QKBgQC0eTeBuEDkTc0lFSeIW+t5l5Bq02sNKnciMpdXuIabkqm0d86BhxkK
FQuSr0KA2oAj0x5aB8V7c04uAbUOevRzh7f0ol0G4Eb6DyWJUKHlYZsPKVZASa3D
EZhtUy9ZHfLiMm7/qxGMZNjWJB56Pi52I6iJbeQGxF5DFXMvgAmrcw==
-----END RSA PRIVATE KEY-----
Remove committed private key – serious security & licensing risk
A real-looking 2048-bit RSA private key is now tracked in Git. Even for testbeds, shipping private keys in-repo:
• breaches the principle of ephemeral secrets
• invalidates any future reuse of the certificate (the key is now public)
• may violate company security/compliance policies and license constraints
Unless this key is demonstrably dummy and never reused outside CI, replace it with a generated fixture (e.g., created in docker-compose up) or fetch via environment/secret store, then scrub the history.
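As a sketch of the generated-fixture approach (file names and subject are illustrative; assumes the `openssl` CLI is available in the test image):

```shell
# Generate throwaway TLS material at test start-up instead of committing
# keys to the repository. The paths and subject below are illustrative.
CERT_DIR="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$CERT_DIR/nssf.key" \
  -out "$CERT_DIR/nssf.pem" \
  -days 1 -subj "/CN=nssf.example.test"

# Sanity-check the freshly generated key; it exists only for this run.
openssl rsa -in "$CERT_DIR/nssf.key" -check -noout
```

A step like this can run in an init container or an entrypoint script before the NFs start, so each test run gets fresh, never-committed keys.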
🤖 Prompt for AI Agents
In tests/integration/cert/nssf.key lines 1 to 27, a real-looking RSA private key
is committed, posing a serious security and compliance risk. Remove this private
key file from the repository, replace it with a dummy or generated key fixture
created dynamically during test setup (e.g., in docker-compose or via
environment variables/secret store), and ensure the git history is scrubbed to
remove the committed key from past commits.
tests/integration/cert/root.key
Outdated
```
-----BEGIN PRIVATE KEY-----
MIIEpAIBAAKCAQEA4ccG+PxGQ02YN2lKFecv0+YOPKzF/I3UdsN6yuWJAtjKaR3P
G8ERLugApak/tySXdaurEau1sMD1glCLzl18Ax/2D9SvStbQYpAb6NNQq1Td5K/k
d2omFNGepyfatGybftxDj+KNEi1AClsgNI/5KETmJzHuR9l0IZHX822UemHmVa8e
0p6rjjvJfuSRtPMnY08Z01lWPi+aeVFXK2kbsKWcaDsF0u39Vz+4Ap4/invqO+6O
Hwg9MttkPK9GKaTuqxjNaXY5zBYDCfXdZw841AD4nbvFCh1N8BEh5ND0xjffeh/w
7ECs1oDGpXdH8s5NQf0OMf5WnQR9wTRbc9WJWwIDAQABAoIBAG0atr3fezzO5H3y
fYC0JaKqi7q6azqudzulH1OpNFqR1M6TPyBUXi9iod1sOoQQRLjyakqxxa1SsdF1
hvoEpwcDjO79gnAR+sWGilYqt9Lzwf7fGPKz8w8gdSCl28RQr5vQaT3A5xcfktDu
VFPSNz9EfEv4jwNdjAck6UyTaarxd/eEBbbSLzulmU8i8ioiOjegZ2XLUdZu8e2e
ZWu2KSfhtKT6W77+3Wjk338yq1pYcKYDZ4myX47IxBgJ9PjYwAFmKLv9+nMbWhdo
O7WWW5XqAq0mdTAXoPjckGx5FQ+8pmQU73GFe8C7vPphsfmMfp8+P+2mgo0XP5Mk
ybTgxXECgYEA8QPUmlm+9WDSOjGfZgsA2ebUqBXisQzvPAmsKrN5knsKB8htESTA
jE/3bYhY3270V1v6mtzRR+gWz61cB13+zh7oNBZv6JrE18XIa6Ujoyi7mHD2/O/L
YBokpO7XFrZUwIn4xQKYUOAKfmblnqneaegUonBcPVaAfLiiQfLL/T0CgYEA79Cq
WMyNutGP/xrcPO/2HX0KAz0su+ibqNCLwE/xTqtb8bDFfqcCve97kY/mleLRgyQU
PSG0QZsU35EirBu9uUHam2iZ+vuhBd3W7V6jz6QD8bbndN9Bo/trDAcmKwJPOgcR
E1HHrhaCMwyC6hIxWIN3KeC336kV86kSKl5fOncCgYEAhVJmH//3N0uY32Il18/M
Z71d86EvplsOV7rOB00CujnfwZBobOxUqwltSKT87JQ3C8Awr8vo5QsI21b9qoGf
eyvzrSSgQR6kNuwMwyc1wIK+y0PqDN8lNIMhtC7+wmyYE23/NBQMZ+iXRXbsHEnB
Y/pit7zJ+ZBJQTTtZOfHWc0CgYEAncddrBq4zMb/xEl5ySXfeTtBESx4eBDsCL9c
kuSF0MpxLayt40wiQ7CCThytCsQ2L9/W6Pl5zQgpTYYBl9aQlog7ObAyTqyrHl77
/yQbBovUhH1f/sa8OxU5Mqqpc/aLBZSMqXdOtrzcYJXEtBfQG3ozNnxAvYhThwIf
j3yTjrUCgYBqZYCwtZqUFJPhQOtbXyQO7Aw4o7fnGE9Dlk0cfROP8/qzf2EGKtNr
wkaGBUdaOVyKCgKAcZR0cKOTTYWfVrXO1q3bmZpxW+yiIVXFCYVLfex6c+6mUNq1
Ae1f+ENYdal4LSc/L3ThnDuGXngGrmjOxLTIBFGY2pLC2SgF76N9XA==
-----END PRIVATE KEY-----
```
Private key committed to VCS – remove or relocate immediately
A real-looking RSA private key is now part of the repository history, posing an immediate security & compliance risk (the Gitleaks flag confirms it). Even if only used for local tests, the key can be harvested, reused inadvertently in prod, or trigger upstream supply-chain scans.
Recommended actions:
- Delete the key from the repo history (`git rm --cached …`, then force-push after a history rewrite) or move to a secrets store generated at runtime.
- If you truly need a static test key, commit only an encrypted or password-protected variant, or script `openssl genrsa …` during container start-up.
- Add `tests/integration/cert/*.key` to `.gitignore` / `.gitattributes export-ignore` so it never re-appears.
Failing to strip this file will block open-sourcing, vendor scans, and CI policy gates.
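The untrack-and-ignore part of the cleanup can be sketched as follows. This demo runs in a scratch repository with a dummy file (the paths mirror the ones above); the history rewrite itself, e.g. with `git filter-repo --invert-paths`, still has to be done on the real repo and force-pushed:

```shell
set -e
# Build a scratch repo with a "committed key" to demonstrate the cleanup on.
repo="$(mktemp -d)"
cd "$repo"
git init -q
mkdir -p tests/integration/cert
echo "dummy-key-material" > tests/integration/cert/root.key
git add -A
git -c user.email=ci@example.com -c user.name=ci commit -qm "init"

# Stop tracking the key and make sure it cannot come back.
echo 'tests/integration/cert/*.key' >> .gitignore
git rm --cached -q tests/integration/cert/root.key
git add .gitignore
git -c user.email=ci@example.com -c user.name=ci commit -qm "untrack test keys"

# The file stays on disk for local use but is now ignored by git.
git check-ignore -q tests/integration/cert/root.key && echo "ignored"
```

Note that `git rm --cached` only removes the file from future commits; scrubbing it from past history requires a rewrite tool and a force-push, after which every clone must re-fetch.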
🧰 Tools
🪛 Gitleaks (8.27.2)
1-27: Identified a Private Key, which may compromise cryptographic security and sensitive data encryption.
(private-key)
🤖 Prompt for AI Agents
In tests/integration/cert/root.key lines 1 to 27, a private RSA key has been
committed to version control, which is a serious security risk. Remove this key
from the repository history using git commands like git rm --cached followed by
a history rewrite and force push. Replace the key with a secure alternative such
as generating it at runtime or using an encrypted/password-protected version for
tests. Also, update .gitignore and .gitattributes to exclude all .key files in
tests/integration/cert to prevent future accidental commits.
```rust
use pcap::{Packet, PacketHeader};
use std::collections::HashMap;
use std::net::Ipv4Addr;

use std::{path::Path, process::Command, time::Duration};

use anyhow::{Context, Result};
use pcap::{Active, Capture, Device};

// Import the tester module
```
Remove duplicate and reorganize imports
The imports are duplicated and disorganized. Lines 1-3 and 7-8 contain pcap-related imports that are duplicated.
```diff
-use pcap::{Packet, PacketHeader};
-use std::collections::HashMap;
-use std::net::Ipv4Addr;
-
-use std::{path::Path, process::Command, time::Duration};
-
-use anyhow::{Context, Result};
-use pcap::{Active, Capture, Device};
-
-// Import the tester module
-
+use anyhow::{Context, Result};
+use pcap::{Active, Capture, Device, Packet, PacketHeader};
+use std::collections::HashMap;
+use std::net::Ipv4Addr;
+use std::time::Duration;
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In tests/integration/src/tester.rs lines 1 to 11, there are duplicated imports
of pcap items and disorganized import statements. Remove the duplicate pcap
imports by consolidating them into a single import statement and reorder all
imports to group related items together logically, such as standard library
imports, external crate imports, and local module imports, to improve clarity
and maintainability.
```rust
// Global constants for 5G Network Functions IP addresses
pub const AMF_IP: &str = "10.0.0.2"; // Access and Mobility Management Function
pub const SMF_IP: &str = "10.0.0.3"; // Session Management Function
pub const UPF_IP: &str = "10.0.0.4"; // User Plane Function
pub const NRF_IP: &str = "10.0.0.5"; // Network Repository Function
pub const AUSF_IP: &str = "10.0.0.6"; // Authentication Server Function
pub const PCF_IP: &str = "10.0.0.7"; // Policy Control Function
pub const NSSF_IP: &str = "10.0.0.8"; // Network Slice Selection Function
pub const UDM_IP: &str = "10.0.0.9"; // Unified Data Management
pub const UDR_IP: &str = "10.0.0.10"; // Unified Data Repository
pub const BSF_IP: &str = "10.0.0.11"; // Binding Support Function
pub const CHF_IP: &str = "10.0.0.12"; // Charging Function
pub const SMSF_IP: &str = "10.0.0.13"; // Short Message Service Function
pub const N3IWF_IP: &str = "10.0.0.14"; // Non-3GPP Interworking Function
pub const SEPP_IP: &str = "10.0.0.15"; // Security Edge Protection Proxy
pub const NWDAF_IP: &str = "10.0.0.16"; // Network Data Analytics Function
pub const GMLC_IP: &str = "10.0.0.17"; // Gateway Mobile Location Centre
pub const SCEF_IP: &str = "10.0.0.18"; // Service Capability Exposure Function
pub const EIR_IP: &str = "10.0.0.19"; // Equipment Identity Register
pub const UDSF_IP: &str = "10.0.0.20"; // Unstructured Data Storage Function
pub const LMF_IP: &str = "10.0.0.21"; // Location Management Function
pub const MBSF_IP: &str = "10.0.0.22"; // Multicast Broadcast Service Function
pub const NAF_IP: &str = "10.0.0.23"; // Network Application Function
pub const NEF_IP: &str = "10.0.0.24"; // Network Exposure Function
pub const SCP_IP: &str = "10.0.0.25"; // Service Communication Proxy
pub const SPP_IP: &str = "10.0.0.26"; // Service Producer Proxy
pub const HSS_IP: &str = "10.0.0.27"; // Home Subscriber Server
pub const CBC_IP: &str = "10.0.0.28"; // Cell Broadcast Centre
pub const IWF_IP: &str = "10.0.0.29"; // Interworking Function
pub const DCCF_IP: &str = "10.0.0.30"; // Data Collection Coordination Function

// GNB Simulator IP addresses
pub const RAN1_IP: &str = "10.0.1.0"; // Ran1
pub const RAN2_IP: &str = "10.0.1.1"; // Ran2
pub const RAN3_IP: &str = "10.0.1.2"; // Ran3

// MongoDB
pub const MONGO_IP: &str = "10.0.100.0"; // MongoDB

// Import the IP constants from main.rs
use crate::*;
```
Remove duplicate IP constant definitions
The IP constants are defined here (lines 14-50) and then imported from the crate root (line 53), causing duplicate definitions. According to the AI summary, these constants are already defined in main.rs.
Remove the local definitions and keep only the import:
```diff
-// Global constants for 5G Network Functions IP addresses
-pub const AMF_IP: &str = "10.0.0.2"; // Access and Mobility Management Function
-pub const SMF_IP: &str = "10.0.0.3"; // Session Management Function
-// ... (remove all constant definitions through line 50)
-pub const MONGO_IP: &str = "10.0.100.0"; // MongoDB
-
 // Import the IP constants from main.rs
 use crate::*;
```
🤖 Prompt for AI Agents
In tests/integration/src/tester.rs from lines 13 to 53, the IP constants are
redundantly defined locally and also imported from the crate root, causing
duplicate definitions. Remove all the local constant definitions between lines
14 and 50, and keep only the import statement on line 53 to use the constants
defined in main.rs.
```rust
pub fn get_nf_name_by_ip(ip: &str) -> &'static str {
    match ip {
        AMF_IP => "AMF",
        SMF_IP => "SMF",
        UPF_IP => "UPF",
        NRF_IP => "NRF",
        AUSF_IP => "AUSF",
        PCF_IP => "PCF",
        NSSF_IP => "NSSF",
        UDM_IP => "UDM",
        UDR_IP => "UDR",
        BSF_IP => "BSF",
        CHF_IP => "CHF",
        SMSF_IP => "SMSF",
        N3IWF_IP => "N3IWF",
        SEPP_IP => "SEPP",
        NWDAF_IP => "NWDAF",
        GMLC_IP => "GMLC",
        SCEF_IP => "SCEF",
        EIR_IP => "EIR",
        UDSF_IP => "UDSF",
        LMF_IP => "LMF",
        MBSF_IP => "MBSF",
        NAF_IP => "NAF",
        NEF_IP => "NEF",
        SCP_IP => "SCP",
        SPP_IP => "SPP",
        HSS_IP => "HSS",
        CBC_IP => "CBC",
        IWF_IP => "IWF",
        DCCF_IP => "DCCF",
        RAN1_IP => "RAN1",
        RAN2_IP => "RAN2",
        RAN3_IP => "RAN3",
        _ => "UNKNOWN",
    }
}
```
🛠️ Refactor suggestion
Add missing MONGO_IP mapping
The function is missing the MONGO_IP mapping, which will cause MongoDB packets to be labeled as "UNKNOWN".
```diff
         RAN1_IP => "RAN1",
         RAN2_IP => "RAN2",
         RAN3_IP => "RAN3",
+        MONGO_IP => "MongoDB",
         _ => "UNKNOWN",
```
🤖 Prompt for AI Agents
In tests/integration/src/tester.rs between lines 84 and 120, the
get_nf_name_by_ip function lacks a mapping for MONGO_IP, causing MongoDB packets
to be labeled as "UNKNOWN". Add a match arm for MONGO_IP that returns the string
"MongoDB" to correctly identify MongoDB packets.
```rust
/// Check if an IP address is a known network function
pub fn is_known_nf_ip(ip: &str) -> bool {
    matches!(ip,
        AMF_IP | SMF_IP | UPF_IP | NRF_IP | AUSF_IP | PCF_IP | NSSF_IP | UDM_IP | UDR_IP |
        BSF_IP | CHF_IP | SMSF_IP | N3IWF_IP | SEPP_IP | NWDAF_IP | GMLC_IP | SCEF_IP |
        EIR_IP | UDSF_IP | LMF_IP | MBSF_IP | NAF_IP | NEF_IP | SCP_IP | SPP_IP |
        HSS_IP | CBC_IP | IWF_IP | DCCF_IP | RAN1_IP | RAN2_IP | RAN3_IP
    )
}
```
🛠️ Refactor suggestion
Add missing MONGO_IP to known IPs list
The function is missing MONGO_IP, which means MongoDB traffic won't be included in the filtered packet analysis.
```diff
     matches!(ip,
         AMF_IP | SMF_IP | UPF_IP | NRF_IP | AUSF_IP | PCF_IP | NSSF_IP | UDM_IP | UDR_IP |
         BSF_IP | CHF_IP | SMSF_IP | N3IWF_IP | SEPP_IP | NWDAF_IP | GMLC_IP | SCEF_IP |
         EIR_IP | UDSF_IP | LMF_IP | MBSF_IP | NAF_IP | NEF_IP | SCP_IP | SPP_IP |
-        HSS_IP | CBC_IP | IWF_IP | DCCF_IP | RAN1_IP | RAN2_IP | RAN3_IP
+        HSS_IP | CBC_IP | IWF_IP | DCCF_IP | RAN1_IP | RAN2_IP | RAN3_IP | MONGO_IP
     )
```

🤖 Prompt for AI Agents
In tests/integration/src/tester.rs around lines 178 to 186, the function
is_known_nf_ip does not include MONGO_IP in its list of known IPs. Add MONGO_IP
to the matches! macro so that MongoDB traffic is correctly recognized as a known
network function IP.
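Both of the MongoDB omissions above stem from the same root cause: `get_nf_name_by_ip` and `is_known_nf_ip` each repeat the full list of constants, so they can drift apart. One way to prevent that is to derive both helpers from a single table, so adding an entry updates both lookups at once. A sketch with a deliberately trimmed, illustrative table (the real code would list every constant from `tester.rs`):

```rust
// One table as the single source of truth; both helpers derive from it.
const NF_TABLE: &[(&str, &str)] = &[
    ("10.0.0.2", "AMF"),
    ("10.0.0.3", "SMF"),
    ("10.0.100.0", "MongoDB"), // MongoDB is covered automatically
];

fn get_nf_name_by_ip(ip: &str) -> &'static str {
    NF_TABLE
        .iter()
        .find(|(addr, _)| *addr == ip)
        .map(|(_, name)| *name)
        .unwrap_or("UNKNOWN")
}

fn is_known_nf_ip(ip: &str) -> bool {
    // A known IP is simply one that appears in the table.
    NF_TABLE.iter().any(|(addr, _)| *addr == ip)
}

fn main() {
    assert_eq!(get_nf_name_by_ip("10.0.100.0"), "MongoDB");
    assert!(is_known_nf_ip("10.0.0.2"));
    assert_eq!(get_nf_name_by_ip("192.168.0.1"), "UNKNOWN");
}
```

The linear scan is negligible for a few dozen entries; a `once_cell`/`LazyLock` `HashMap` would work too if the table grows.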
```rust
/// Process and filter packets based on source and destination
pub fn process_filtered_packets(capture: &mut pcap::Capture<pcap::Active>, duration: std::time::Duration) -> Result<()> {
    println!("🔍 Starting filtered packet analysis...");
    println!("📡 Monitoring traffic between 5G Network Functions...");
    println!("{}", "=".repeat(80));

    let start_time = std::time::Instant::now();
    let mut packet_count = 0;
    let mut filtered_count = 0;

    // Statistics tracking
    let mut flow_stats: HashMap<String, usize> = HashMap::new();

    while start_time.elapsed() < duration {
        match capture.next_packet() {
            Ok(packet) => {
                packet_count += 1;

                if let Some(flow) = analyze_packet(&packet)? {
                    filtered_count += 1;
                    print_packet_flow(&flow, filtered_count);

                    // Track flow statistics
                    let flow_key = format!("{} → {}", flow.source_name, flow.destination_name);
                    *flow_stats.entry(flow_key).or_insert(0) += 1;
                }
            }
            Err(pcap::Error::TimeoutExpired) => {
                // Timeout is expected, continue
                continue;
            }
            Err(e) => {
                eprintln!("❌ Error capturing packet: {:?}", e);
                break;
            }
        }
    }

    // Print summary statistics
    println!("{}", "=".repeat(80));
    println!("📊 Packet Analysis Summary:");
    println!("  Total packets captured: {}", packet_count);
    println!("  Filtered packets (NF traffic): {}", filtered_count);
    println!("  Filter rate: {:.2}%", (filtered_count as f64 / packet_count as f64) * 100.0);

    if !flow_stats.is_empty() {
        println!("\n🔄 Top Network Function Flows:");
        let mut sorted_flows: Vec<_> = flow_stats.iter().collect();
        sorted_flows.sort_by(|a, b| b.1.cmp(a.1));

        for (flow, count) in sorted_flows.iter().take(10) {
            println!("  {}: {} packets", flow, count);
        }
    }

    println!("✅ Packet analysis completed!");
    Ok(())
}
```
Fix potential division by zero error
The filter rate calculation on line 293 can cause a division by zero error if no packets are captured.
```diff
-    println!("  Filter rate: {:.2}%", (filtered_count as f64 / packet_count as f64) * 100.0);
+    println!("  Filter rate: {:.2}%",
+        if packet_count > 0 {
+            (filtered_count as f64 / packet_count as f64) * 100.0
+        } else {
+            0.0
+        });
```

🤖 Prompt for AI Agents
In tests/integration/src/tester.rs around lines 250 to 307, the filter rate
calculation divides filtered_count by packet_count without checking if
packet_count is zero, which can cause a division by zero error. Fix this by
adding a conditional check before the division to ensure packet_count is greater
than zero; if it is zero, handle the case gracefully by either skipping the
calculation or setting the filter rate to zero.
```python
#!/usr/bin/env python3

import os
import logging
import time
import shlex
from typing import Dict, Set, Optional, Any
from datetime import datetime, timezone
from scapy.all import rdpcap, IP, TCP, UDP, SCTP
from scapy.layers.http import HTTP, HTTPRequest, HTTPResponse
import argparse
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from scapy.utils import wrpcap
from scapy.contrib.http2 import *
from scapy.config import conf
from h2_decoder import HTTP2Decoder

conf.use_pcap = True
```
🛠️ Refactor suggestion
Remove unused imports and fix wildcard import
The static analysis correctly identifies unused imports and a problematic wildcard import.
```diff
-from scapy.all import rdpcap, IP, TCP, UDP, SCTP
-from scapy.layers.http import HTTP, HTTPRequest, HTTPResponse
+from scapy.all import rdpcap, IP, TCP, UDP, SCTP, HTTPRequest, HTTPResponse
 import argparse
 from watchdog.observers import Observer
 from watchdog.events import FileSystemEventHandler
-from scapy.utils import wrpcap
-from scapy.contrib.http2 import *
 from scapy.config import conf
 from h2_decoder import HTTP2Decoder
```
🧰 Tools
🪛 Ruff (0.12.2)
10-10: scapy.layers.http.HTTP imported but unused
Remove unused import: scapy.layers.http.HTTP
(F401)
14-14: scapy.utils.wrpcap imported but unused
Remove unused import: scapy.utils.wrpcap
(F401)
15-15: from scapy.contrib.http2 import * used; unable to detect undefined names
(F403)
🤖 Prompt for AI Agents
In tests/integration/sniffer/capturer.py lines 1 to 20, remove all imports that
are not used anywhere in the file to clean up the code. Replace the wildcard
import from scapy.contrib.http2 with explicit imports of only the necessary
classes or functions used in the code to avoid namespace pollution and improve
clarity.
```python
def init_mongo_collection():
    from pymongo import MongoClient
    from pymongo.errors import CollectionInvalid

    global mongo_collection
    mongo_client = MongoClient(MONGO_URI)
    mongo_db = mongo_client["integration-tests"]
    # Ensure collection exists
    try:
        mongo_db.create_collection("packet_analysis")
    except CollectionInvalid:
        pass  # Collection already exists
    mongo_collection = mongo_db["packet_analysis"]
```
🛠️ Refactor suggestion
Add MongoDB connection error handling
The MongoDB connection could fail without proper error handling.
```diff
 def init_mongo_collection():
-    from pymongo import MongoClient
+    from pymongo import MongoClient, errors
     from pymongo.errors import CollectionInvalid

     global mongo_collection
-    mongo_client = MongoClient(MONGO_URI)
-    mongo_db = mongo_client["integration-tests"]
+    try:
+        mongo_client = MongoClient(MONGO_URI, serverSelectionTimeoutMS=5000)
+        # Test connection
+        mongo_client.server_info()
+        mongo_db = mongo_client["integration-tests"]
+    except errors.ServerSelectionTimeoutError as e:
+        logging.error(f"Failed to connect to MongoDB at {MONGO_URI}: {e}")
+        mongo_collection = None
+        return
```
🧰 Tools
🪛 Ruff (0.12.2)
97-100: Use contextlib.suppress(CollectionInvalid) instead of try-except-pass
(SIM105)
🤖 Prompt for AI Agents
In tests/integration/sniffer/capturer.py around lines 89 to 101, the MongoDB
connection setup lacks error handling for connection failures. Wrap the
MongoClient initialization and subsequent database/collection operations in a
try-except block to catch exceptions like pymongo.errors.ConnectionFailure or
general exceptions. Log or handle the error appropriately to prevent unhandled
exceptions and provide clear feedback if the connection fails.
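Separately, Ruff's SIM105 hint above also applies here: the try/except/pass around `create_collection` reads more cleanly as `contextlib.suppress`. A self-contained sketch of the pattern (a stand-in exception class replaces pymongo's `CollectionInvalid` so the snippet runs without a database):

```python
import contextlib


class CollectionInvalid(Exception):
    """Stand-in for pymongo.errors.CollectionInvalid (illustrative only)."""


def create_collection(name: str, existing: set) -> None:
    # Mimics MongoDB: creating an already-existing collection raises.
    if name in existing:
        raise CollectionInvalid(name)
    existing.add(name)


existing = {"packet_analysis"}
# Equivalent to: try: create_collection(...) / except CollectionInvalid: pass
with contextlib.suppress(CollectionInvalid):
    create_collection("packet_analysis", existing)
assert existing == {"packet_analysis"}
```

In the real `init_mongo_collection`, the `with contextlib.suppress(CollectionInvalid):` line simply replaces the try/except block around `mongo_db.create_collection("packet_analysis")`.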
```python
# --- Unified process_packet ---
def process_packet(packet) -> None:
    if not packet.haslayer(IP):
        return
    ip_layer = packet[IP]
    src_ip = ip_layer.src
    dst_ip = ip_layer.dst
    # Check if packet involves target IPs
    if src_ip not in TARGET_IPS and dst_ip not in TARGET_IPS:
        return
    # Check if packet involves ignored IPs
    if src_ip in IGNORE_IP_TO_ALIAS or dst_ip in IGNORE_IP_TO_ALIAS:
        return
    # Analyze packet based on protocol
    if packet.haslayer(SCTP):
        sctp_layer = packet[SCTP]
        payload = (
            bytes(sctp_layer.payload)
            if hasattr(sctp_layer, "payload") and sctp_layer.payload
            else b""
        )
        format_and_log_payload(
            "SCTP", src_ip, dst_ip, sctp_layer.sport, sctp_layer.dport, payload
        )
        process_sctp_packet_store(
            packet, src_ip, dst_ip, sctp_layer.sport, sctp_layer.dport, payload
        )
    elif packet.haslayer(TCP):
        tcp_layer = packet[TCP]
        payload = (
            bytes(tcp_layer.payload)
            if hasattr(tcp_layer, "payload") and tcp_layer.payload
            else b""
        )
        format_and_log_payload(
            "TCP", src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload
        )
        # Heuristic: HTTP/2 uses magic preface or :method header, HTTP/1 uses GET/POST/PUT/HTTP/
        if payload.startswith(b"PRI * HTTP/2.0") or b":method" in payload:
            process_tcp_packet_http2(
                packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload
            )
        elif any(
            payload.startswith(method.encode() + b" ") for method in HTTP_METHODS
        ) or payload.startswith(b"HTTP/"):
            process_tcp_packet_http1(
                packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload
            )
        else:
            # Try both, fallback to HTTP/1
            try:
                process_tcp_packet_http2(
                    packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload
                )
            except Exception as e:
                logging.exception(f"Error processing packet as HTTP/2: {e}")
                import traceback

                logging.error(f"Exception traceback:\n{traceback.format_exc()}")
                process_tcp_packet_http1(
                    packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload
                )
    elif packet.haslayer(UDP):
        udp_layer = packet[UDP]
        payload = (
            bytes(udp_layer.payload)
            if hasattr(udp_layer, "payload") and udp_layer.payload
            else b""
        )
        format_and_log_payload(
            "UDP", src_ip, dst_ip, udp_layer.sport, udp_layer.dport, payload
        )
        result = analyze_udp_packet(packet)
    elif packet.haslayer(TCP):
        result = analyze_tcp_packet(packet)
```
Fix unreachable code and remove unused variable
There's duplicate TCP layer checking - the second elif packet.haslayer(TCP) on line 642 is unreachable. Also, the result variable is assigned but never used.
```diff
     elif packet.haslayer(UDP):
         udp_layer = packet[UDP]
         payload = (
             bytes(udp_layer.payload)
             if hasattr(udp_layer, "payload") and udp_layer.payload
             else b""
         )
         format_and_log_payload(
             "UDP", src_ip, dst_ip, udp_layer.sport, udp_layer.dport, payload
         )
-        result = analyze_udp_packet(packet)
-    elif packet.haslayer(TCP):
-        result = analyze_tcp_packet(packet)
+        # UDP packets are logged but not stored/processed further
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # --- Unified process_packet --- | |
| def process_packet(packet) -> None: | |
| if not packet.haslayer(IP): | |
| return | |
| ip_layer = packet[IP] | |
| src_ip = ip_layer.src | |
| dst_ip = ip_layer.dst | |
| # Check if packet involves target IPs | |
| if src_ip not in TARGET_IPS and dst_ip not in TARGET_IPS: | |
| return | |
| # Check if packet involves ignored IPs | |
| if src_ip in IGNORE_IP_TO_ALIAS or dst_ip in IGNORE_IP_TO_ALIAS: | |
| return | |
| # Analyze packet based on protocol | |
| if packet.haslayer(SCTP): | |
| sctp_layer = packet[SCTP] | |
| payload = ( | |
| bytes(sctp_layer.payload) | |
| if hasattr(sctp_layer, "payload") and sctp_layer.payload | |
| else b"" | |
| ) | |
| format_and_log_payload( | |
| "SCTP", src_ip, dst_ip, sctp_layer.sport, sctp_layer.dport, payload | |
| ) | |
| process_sctp_packet_store( | |
| packet, src_ip, dst_ip, sctp_layer.sport, sctp_layer.dport, payload | |
| ) | |
| elif packet.haslayer(TCP): | |
| tcp_layer = packet[TCP] | |
| payload = ( | |
| bytes(tcp_layer.payload) | |
| if hasattr(tcp_layer, "payload") and tcp_layer.payload | |
| else b"" | |
| ) | |
| format_and_log_payload( | |
| "TCP", src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload | |
| ) | |
| # Heuristic: HTTP/2 uses magic preface or :method header, HTTP/1 uses GET/POST/PUT/HTTP/ | |
| if payload.startswith(b"PRI * HTTP/2.0") or b":method" in payload: | |
| process_tcp_packet_http2( | |
| packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload | |
| ) | |
| elif any( | |
| payload.startswith(method.encode() + b" ") for method in HTTP_METHODS | |
| ) or payload.startswith(b"HTTP/"): | |
| process_tcp_packet_http1( | |
| packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload | |
| ) | |
| else: | |
| # Try both, fallback to HTTP/1 | |
| try: | |
| process_tcp_packet_http2( | |
| packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload | |
| ) | |
| except Exception as e: | |
| logging.exception(f"Error processing packet as HTTP/2: {e}") | |
| import traceback | |
| logging.error(f"Exception traceback:\n{traceback.format_exc()}") | |
| process_tcp_packet_http1( | |
| packet, src_ip, dst_ip, tcp_layer.sport, tcp_layer.dport, payload | |
| ) | |
| elif packet.haslayer(UDP): | |
| udp_layer = packet[UDP] | |
| payload = ( | |
| bytes(udp_layer.payload) | |
| if hasattr(udp_layer, "payload") and udp_layer.payload | |
| else b"" | |
| ) | |
| format_and_log_payload( | |
| "UDP", src_ip, dst_ip, udp_layer.sport, udp_layer.dport, payload | |
| ) | |
| result = analyze_udp_packet(packet) | |
| elif packet.haslayer(TCP): | |
| result = analyze_tcp_packet(packet) | |
| elif packet.haslayer(UDP): | |
| udp_layer = packet[UDP] | |
| payload = ( | |
| bytes(udp_layer.payload) | |
| if hasattr(udp_layer, "payload") and udp_layer.payload | |
| else b"" | |
| ) | |
| format_and_log_payload( | |
| "UDP", src_ip, dst_ip, udp_layer.sport, udp_layer.dport, payload | |
| ) | |
| # UDP packets are logged but not stored/processed further |
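The TCP branch above distinguishes HTTP/2 from HTTP/1 purely by inspecting the first payload bytes. A minimal, standalone sketch of that heuristic (the function name here is illustrative, not taken from capturer.py, and the real code additionally checks for a `:method` header and falls back to trying the HTTP/2 decoder):

```python
# HTTP/2 connections open with a fixed 24-byte preface; HTTP/1 messages
# start with a method token or an "HTTP/" status line.
HTTP_METHODS = ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"]

def guess_http_version(payload: bytes) -> str:
    if payload.startswith(b"PRI * HTTP/2.0"):
        return "h2"
    if payload.startswith(b"HTTP/") or any(
        payload.startswith(m.encode() + b" ") for m in HTTP_METHODS
    ):
        return "h1"
    return "unknown"

print(guess_http_version(b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"))            # h2
print(guess_http_version(b"GET /nnrf-nfm/v1/nf-instances HTTP/1.1\r\n"))  # h1
print(guess_http_version(b"\x00\x17\x03"))                                # unknown
```

Note that mid-stream HTTP/2 frames carry no preface, which is why a fallback path is needed for reassembled streams.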
🧰 Tools
🪛 Ruff (0.12.2)
643-643: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
🤖 Prompt for AI Agents
In tests/integration/sniffer/capturer.py between lines 569 and 644, remove the
duplicate `elif packet.haslayer(TCP)` block starting at line 642 since it is
unreachable due to the earlier TCP check. Also, eliminate the unused `result`
variable assignments in both the UDP and the duplicate TCP blocks to clean up
the code.
Docstrings generation was requested by @pranav-kumar-hex in #5 (comment). The following files were modified:

* `lightning-cli/src/nf_type.rs`
* `lightning-nf/omnipath/app/src/builder/sbi/nrf.rs`
* `lightning-nf/omnipath/app/src/config/mod.rs`
* `lightning-nf/omnipath/app/src/context/app_context.rs`
* `lightning-nf/omnipath/app/src/lib.rs`
* `lightning-nf/omnipath/app/src/ngap/engine/controller.rs`
* `tests/integration/sniffer/capturer.py`
* `tests/integration/sniffer/h2_decoder/connection.py`
* `tests/integration/sniffer/h2_decoder/decoder.py`
* `tests/integration/sniffer/h2_decoder/stream.py`
* `tests/integration/src/main.rs`
* `tests/integration/src/tester.rs`
* `tests/integration/user-init/create-user.py`
Note: Generated docstrings for this pull request at #6.
cubic analysis
40 issues found across 86 files.
tests/integration/cert/udr.key
Outdated
| @@ -0,0 +1,27 @@ | |||
| -----BEGIN RSA PRIVATE KEY----- | |||
A PEM-encoded RSA private key is committed to the repository, exposing sensitive credentials and posing a serious security risk; private keys should be stored securely outside version control or replaced with generated test fixtures.
Prompt for AI agents
Address the following comment on tests/integration/cert/udr.key at line 1:
<comment>A PEM-encoded RSA private key is committed to the repository, exposing sensitive credentials and posing a serious security risk; private keys should be stored securely outside version control or replaced with generated test fixtures.</comment>
<file context>
@@ -0,0 +1,27 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIEpAIBAAKCAQEAyL2vLO4vrQu2gGMsnjJ+ECkB0ThuA4CmhEktmhZx7uQCrDoX
+JG+qNgA5Xx5g/iM8otHB14X4CQB0ghjDSlw6z/M+eCL6fQaqjFkhrXj+LPeqNnWU
</file context>
tests/integration/cert/udm.key
Outdated
| @@ -0,0 +1,27 @@ | |||
| -----BEGIN RSA PRIVATE KEY----- | |||
An un-encrypted RSA private key is committed to the repository, exposing credentials and creating a security liability; keys should be generated at test time or injected via a secret manager instead.
Prompt for AI agents
Address the following comment on tests/integration/cert/udm.key at line 1:
<comment>An un-encrypted RSA private key is committed to the repository, exposing credentials and creating a security liability; keys should be generated at test time or injected via a secret manager instead.</comment>
<file context>
@@ -0,0 +1,27 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIEowIBAAKCAQEAo544dewlx0hksbXlgVVw0kZ2xdUehMDlgwQaQQOO8tntQ3lG
+H+3UeI2K8QUjrc9DlqkEkzLz7yLQDIFtXV063DWlH521xOTCbsU3iYjb+pZ0uRLV
</file context>
tests/integration/cert/smf.key
Outdated
| @@ -0,0 +1,27 @@ | |||
| -----BEGIN RSA PRIVATE KEY----- | |||
A plain RSA private key is being committed to the repository, exposing sensitive credentials and creating a serious security risk.
Prompt for AI agents
Address the following comment on tests/integration/cert/smf.key at line 1:
<comment>A plain RSA private key is being committed to the repository, exposing sensitive credentials and creating a serious security risk.</comment>
<file context>
@@ -0,0 +1,27 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIEowIBAAKCAQEA3SU3yVrp5Ig9aRFh3OWtvydxJC/ZzA8zijrSRldrCMEhCPsX
+C6htxPiolyYfX95ooG3lUhU8KHunQ6RkBqYtxKHrk/uSeN6iDOxv2d4cp7MtTlHn
</file context>
| port: 8000 # port used to bind the service | ||
| tls: # the local path of TLS key | ||
| pem: cert/omnipath.pem # AMF TLS Certificate | ||
| key: cert/omnipath.key # AMF TLS Private key |
Private TLS key is stored in the repository, risking key leakage and compromise of TLS sessions.
Prompt for AI agents
Address the following comment on tests/integration/config/omnipathcfg.yaml at line 12:
<comment>Private TLS key is stored in the repository, risking key leakage and compromise of TLS sessions.</comment>
<file context>
@@ -0,0 +1,130 @@
+info:
+ version: 1.0.0
+ description: AMF initial local configuration
+
+sbi: # Service-based interface information
+ scheme: http # the protocol for sbi (http or https)
+ registerIp: omnipath.unifyair.com # IP used to register to NRF
+ bindingIp: omnipath.unifyair.com # IP used to bind the service
+ port: 8000 # port used to bind the service
</file context>
| XfrmiName: ipsec # prefix of XFRM interface created | ||
| GreIfaceName: gretun # prefix of GRE interface created | ||
| IkeSaSPI: 0x000000000006f708 # IKE SA SPI (hex string, 8 octets) | ||
| IPSecSA3gppControlPlaneSPI: 0x00000002 # IPsec/Child SA SPI for 3GPP Control Plane (hex string, 4 octets) |
IPSecSA3gppControlPlaneSPI should be quoted to preserve full 4-octet hex representation
Prompt for AI agents
Address the following comment on tests/integration/config/n3uecfg.yaml at line 22:
<comment>IPSecSA3gppControlPlaneSPI should be quoted to preserve full 4-octet hex representation</comment>
<file context>
@@ -0,0 +1,42 @@
+info:
+ version: 1.0.1
+ description: Non-3GPP UE configuration
+configuration:
+ N3IWFInformation:
+ IPSecIfaceAddr: 10.100.200.15 # IP address of Nwu interface (IKE) on N3IWF
+ IPsecInnerAddr: 10.0.0.1 # IP address of IPsec tunnel enpoint on N3IWF
+ N3UEInformation:
+ IMSI:
</file context>
| IPSecSA3gppControlPlaneSPI: 0x00000002 # IPsec/Child SA SPI for 3GPP Control Plane (hex string, 4 octets) | |
| IPSecSA3gppControlPlaneSPI: "0x00000002" # IPsec/Child SA SPI for 3GPP Control Plane (hex string, 4 octets) |
| maxRetryTimes: 4 # the max number of retransmission | ||
| locality: area1 # Name of the location where a set of AMF, SMF, PCF and UPFs are located | ||
| sctp: # set the sctp server setting <optional>, once this field is set, please also add maxInputStream, maxOsStream, maxAttempts, maxInitTimeOut | ||
| numOstreams: 3 # the maximum out streams of each sctp connection |
Parameter names numOstreams and maxInstreams do not match the naming convention referenced in the comment, risking the settings being silently ignored.
Prompt for AI agents
Address the following comment on tests/integration/config/omnipathcfg.yaml at line 118:
<comment>Parameter names numOstreams and maxInstreams do not match the naming convention referenced in the comment, risking the settings being silently ignored.</comment>
<file context>
@@ -0,0 +1,130 @@
+info:
+ version: 1.0.0
+ description: AMF initial local configuration
+
+sbi: # Service-based interface information
+ scheme: http # the protocol for sbi (http or https)
+ registerIp: omnipath.unifyair.com # IP used to register to NRF
+ bindingIp: omnipath.unifyair.com # IP used to bind the service
+ port: 8000 # port used to bind the service
</file context>
| - DNN: internet2 | ||
| SNSSAI: # Single Network Slice Selection Assistance Information | ||
| SST: 1 # Slice/Service Type (1 byte hex string, range: 0~F) | ||
| SD: 112233 # Slice Differentiator (3 bytes hex string, range: 000000~FFFFFF) |
SD should be quoted to remain a hexadecimal string rather than an integer
Prompt for AI agents
Address the following comment on tests/integration/config/n3uecfg.yaml at line 31:
<comment>SD should be quoted to remain a hexadecimal string rather than an integer</comment>
<file context>
@@ -0,0 +1,42 @@
+info:
+ version: 1.0.1
+ description: Non-3GPP UE configuration
+configuration:
+ N3IWFInformation:
+ IPSecIfaceAddr: 10.100.200.15 # IP address of Nwu interface (IKE) on N3IWF
+ IPsecInnerAddr: 10.0.0.1 # IP address of IPsec tunnel enpoint on N3IWF
+ N3UEInformation:
+ IMSI:
</file context>
| SD: 112233 # Slice Differentiator (3 bytes hex string, range: 000000~FFFFFF) | |
| SD: "112233" # Slice Differentiator (3 bytes hex string, range: 000000~FFFFFF) |
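The quoting comments above (for SD, SST, and the SPI values) all stem from YAML 1.1 implicit typing: plain, unquoted scalars that look numeric are resolved to integers, which discards leading zeros and hex formatting. A small stdlib-only illustration, where `resolve_scalar` is a hypothetical stand-in for a YAML loader's scalar resolver:

```python
def resolve_scalar(token: str):
    # Mimics how a YAML 1.1 loader treats a plain scalar: try integer
    # forms first (base 0 accepts decimal and 0x... hex), else keep a string.
    try:
        return int(token, 0)
    except ValueError:
        return token

print(resolve_scalar("112233"))     # 112233 (int) - no longer a 3-byte hex string
print(resolve_scalar("0x00000002")) # 2 (int) - the 4-octet width is lost
print(resolve_scalar("internet"))   # 'internet' (str) - unaffected
```

Quoting the value (`"112233"`, `"0x00000002"`) forces the loader to keep it as a string, which is why the suggested fixes add quotes.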
| fn cleanup_docker_resources() -> Result<()> { | ||
| // Stop and remove any existing containers | ||
| let _ = Command::new("docker-compose") |
Inconsistent use of "docker-compose" vs "docker compose" can cause portability issues on systems where only one of them is available.
Prompt for AI agents
Address the following comment on tests/integration/src/main.rs at line 79:
<comment>Inconsistent use of "docker-compose" vs "docker compose" can cause portability issues on systems where only one of them is available.</comment>
<file context>
@@ -0,0 +1,205 @@
+use std::{path::Path, process::Command, time::Duration};
+
+use anyhow::{Context, Result};
+use pcap::{Active, Capture, Device};
+
+// Import the tester module
+mod tester;
+use tester::*;
+
</file context>
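One way to address this portability concern, sketched here in Python rather than the repository's Rust (the helper name is hypothetical): resolve whichever Compose entry point is installed before shelling out. A stricter probe would also run `docker compose version` to confirm the v2 plugin is present, since `docker` alone does not guarantee it.

```python
import shutil

def compose_argv(args, which=shutil.which):
    """Return the argv for whichever Compose entry point is installed."""
    if which("docker"):
        return ["docker", "compose", *args]   # Compose v2 plugin
    if which("docker-compose"):
        return ["docker-compose", *args]      # legacy standalone binary
    raise FileNotFoundError("no Docker Compose entry point found")

# Simulate a host that only has the legacy binary:
print(compose_argv(["up", "-d"], which=lambda name: name == "docker-compose"))
```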
| SmPolicy: # DNN and Network Slice Requested by N3UE | ||
| - DNN: internet | ||
| SNSSAI: # Single Network Slice Selection Assistance Information | ||
| SST: 1 # Slice/Service Type (1 byte hex string, range: 0~F) |
SST should be quoted to preserve hexadecimal semantics specified in the comment
Prompt for AI agents
Address the following comment on tests/integration/config/n3uecfg.yaml at line 26:
<comment>SST should be quoted to preserve hexadecimal semantics specified in the comment</comment>
<file context>
@@ -0,0 +1,42 @@
+info:
+ version: 1.0.1
+ description: Non-3GPP UE configuration
+configuration:
+ N3IWFInformation:
+ IPSecIfaceAddr: 10.100.200.15 # IP address of Nwu interface (IKE) on N3IWF
+ IPsecInnerAddr: 10.0.0.1 # IP address of IPsec tunnel enpoint on N3IWF
+ N3UEInformation:
+ IMSI:
</file context>
| SST: 1 # Slice/Service Type (1 byte hex string, range: 0~F) | |
| SST: "1" # Slice/Service Type (1 byte hex string, range: 0~F) |
tests/integration/sniffer/Dockerfile
Outdated
| RUN apt-get install -y python3 python3-pip python3.13-venv | ||
| RUN python3 -m venv /opt/venv | ||
| ENV PATH="/opt/venv/bin:$PATH" | ||
| RUN pip install scapy colorama watchdog h2 pymongo |
Python dependencies are installed without version pinning, reducing build reproducibility and increasing supply-chain risk.
Prompt for AI agents
Address the following comment on tests/integration/sniffer/Dockerfile at line 43:
<comment>Python dependencies are installed without version pinning, reducing build reproducibility and increasing supply-chain risk.</comment>
<file context>
@@ -0,0 +1,46 @@
+# Use a slim Python base image
+# FROM python:3.10-slim-bookworm
+FROM ubuntu:25.10
+
+# Install build dependencies and system packages
+RUN apt-get update && apt-get install -y \
+ libpcap-dev git curl iproute2 supervisor \
+ build-essential cmake \
+ libsystemd-dev \
</file context>
| RUN pip install scapy colorama watchdog h2 pymongo | |
| RUN pip install scapy==2.5.0 colorama==0.4.6 watchdog==4.0.0 h2==4.1.0 pymongo==4.7.0 |
Enhances the integration test suite for improved stability and reliability. This includes granting necessary capabilities to the capturer container, reducing the stop grace period for faster cleanup, and documenting Docker Desktop host networking requirements. Additionally, it suppresses unnecessary logging to reduce noise and corrects the tshark output path.
Actionable comments posted: 4
🔭 Outside diff range comments (1)
tests/integration/docker-compose.yaml (1)
348-358: Remove trailing blank line and confirm MTU value
- File tests/integration/docker-compose.yaml, line 357: remove the extra blank line at end of file to satisfy YAML linting.
- MTU is set to 65535 (maximum possible). Please confirm if this is intentional for testing large packets, or if it should be changed to a more typical value (e.g., 1500) or a jumbo-frame size (e.g., 9000).
ipam:
  config:
    - subnet: 10.0.0.0/16

Is the MTU of 65535 intentional, or should it be adjusted?
♻️ Duplicate comments (5)
tests/integration/sniffer/capturer.py (5)
1-20: Remove unused imports and fix wildcard import

The static analysis correctly identifies unused imports and a problematic wildcard import.

-from scapy.all import rdpcap, IP, TCP, UDP, SCTP
-from scapy.layers.http import HTTP, HTTPRequest, HTTPResponse
+from scapy.all import rdpcap, IP, TCP, UDP, SCTP, HTTPRequest, HTTPResponse
 import argparse
 from watchdog.observers import Observer
 from watchdog.events import FileSystemEventHandler
-from scapy.utils import wrpcap
-from scapy.contrib.http2 import *
 from scapy.config import conf
 from h2_decoder import HTTP2Decoder
89-101: Add MongoDB connection error handling

The MongoDB connection could fail without proper error handling.

 def init_mongo_collection():
-    from pymongo import MongoClient
+    from pymongo import MongoClient, errors
     from pymongo.errors import CollectionInvalid
     global mongo_collection
-    mongo_client = MongoClient(MONGO_URI)
-    mongo_db = mongo_client["integration-tests"]
+    try:
+        mongo_client = MongoClient(MONGO_URI, serverSelectionTimeoutMS=5000)
+        # Test connection
+        mongo_client.server_info()
+        mongo_db = mongo_client["integration-tests"]
+    except errors.ServerSelectionTimeoutError as e:
+        logging.error(f"Failed to connect to MongoDB at {MONGO_URI}: {e}")
+        mongo_collection = None
+        return
314-330: Fix mutable default argument

Using a mutable default argument can lead to unexpected behavior if the default value is modified.

 def store_reconstructed_payload(
     protocol, src_ip, dst_ip, src_port, dst_port,
     req_headers=None, path=None, payload=None,
     request=None, response=None,
     resp_headers=None, resp_status=None, resp_reason=None, method=None,
-    additional_data={},
+    additional_data=None,
 ):
+    if additional_data is None:
+        additional_data = {}
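The pitfall behind this fix can be shown in a few lines (a toy function, not from capturer.py): Python evaluates default values once, at definition time, so a mutable default is shared across every call.

```python
def append_buggy(item, acc=[]):      # default list created once, then shared
    acc.append(item)
    return acc

def append_fixed(item, acc=None):    # idiomatic fix: create a fresh list per call
    if acc is None:
        acc = []
    acc.append(item)
    return acc

print(append_buggy(1))  # [1]
print(append_buggy(2))  # [1, 2]  <- state leaked from the previous call
print(append_fixed(1))  # [1]
print(append_fixed(2))  # [2]
```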
630-643: Fix unreachable code and remove unused variable

There's duplicate TCP layer checking - the second `elif packet.haslayer(TCP)` on line 642 is unreachable due to the earlier TCP check. Also, the `result` variable is assigned but never used.

 elif packet.haslayer(UDP):
     udp_layer = packet[UDP]
     payload = (
         bytes(udp_layer.payload)
         if hasattr(udp_layer, "payload") and udp_layer.payload
         else b""
     )
     format_and_log_payload(
         "UDP", src_ip, dst_ip, udp_layer.sport, udp_layer.dport, payload
     )
-    result = analyze_udp_packet(packet)
-elif packet.haslayer(TCP):
-    result = analyze_tcp_packet(packet)
+    # UDP packets are logged but not stored/processed further
755-807: Make network interface configurable

The network interface "br-unifyair" is hardcoded, which may not work in all environments.

-def build_tshark_command():
+def build_tshark_command(interface="br-unifyair"):
     """Build the tshark command with the correct capture filter and options."""
     # ... existing code ...
     # Compose the tshark command
     return [
         "tshark",
         "-i",
-        "br-unifyair",
+        interface,
         "-f",
         capture_filter,

Also update the main function to accept an interface parameter:

 parser.add_argument(
     "--log-level",
     # ... existing code ...
 )
+parser.add_argument(
+    "--interface",
+    type=str,
+    default="br-unifyair",
+    help="Network interface to capture on (default: br-unifyair)",
+)
🧹 Nitpick comments (1)
tests/integration/docker-compose.yaml (1)
45-50: Address TODO comments for UPF implementation

The TODO comments indicate important missing functionality:
- The gtp5g kernel module requirement for UPF
- Consideration of an eBPF-based UPF solution
These are critical for proper UPF functionality in the integration test environment.
Would you like me to create issues to track these TODOs or help research eBPF-based UPF alternatives?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- tests/integration/README.md (1 hunks)
- tests/integration/docker-compose.yaml (1 hunks)
- tests/integration/sniffer/capturer.py (1 hunks)
- tests/integration/sniffer/supervisord.conf (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/integration/sniffer/supervisord.conf
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration/README.md
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/integration/sniffer/capturer.py (1)
tests/integration/sniffer/h2_decoder/decoder.py (2)
- HTTP2Decoder (13-380)
- process_tcp_packet (69-121)
🪛 YAMLlint (1.37.1)
tests/integration/docker-compose.yaml
[error] 5-5: trailing spaces
(trailing-spaces)
[warning] 6-6: too many spaces after colon
(colons)
[warning] 357-357: too many blank lines (1 > 0)
(empty-lines)
🪛 Ruff (0.12.2)
tests/integration/sniffer/capturer.py
10-10: scapy.layers.http.HTTP imported but unused
Remove unused import: scapy.layers.http.HTTP
(F401)
14-14: scapy.utils.wrpcap imported but unused
Remove unused import: scapy.utils.wrpcap
(F401)
15-15: from scapy.contrib.http2 import * used; unable to detect undefined names
(F403)
97-100: Use contextlib.suppress(CollectionInvalid) instead of try-except-pass
(SIM105)
202-202: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
329-329: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
558-558: f-string without any placeholders
Remove extraneous f prefix
(F541)
642-642: Local variable result is assigned to but never used
Remove assignment to unused variable result
(F841)
| services: | ||
| capturer: | ||
| build: | ||
| context: ./sniffer | ||
| dockerfile: Dockerfile | ||
| command: sh -c "rm -rf /app/capture/* /app/capture/.* 2>/dev/null || true; touch /app/capture/tshark.out; /usr/bin/supervisord -c ./supervisord.conf" | ||
| container_name: capturer | ||
| volumes: | ||
| - ./capture:/app/capture | ||
| - ./sniffer/capturer.py:/app/capturer.py | ||
| - ./sniffer/h2_decoder:/app/h2_decoder | ||
| - ./sniffer/supervisord.conf:/app/supervisord.conf | ||
| network_mode: host | ||
| # CRITICAL: Add capabilities to allow packet sniffing | ||
| cap_add: | ||
| - NET_RAW | ||
| - NET_ADMIN | ||
| - SYS_ADMIN | ||
| stop_grace_period: 50s | ||
| depends_on: | ||
| - db | ||
| healthcheck: | ||
| test: [ "CMD", "sh", "-c", "ls -A /app/capture | grep -q ." ] | ||
| interval: 10s | ||
| timeout: 5s | ||
| retries: 3 | ||
| start_period: 20s | ||
|
|
Fix YAML formatting issues
The static analysis identifies formatting issues that should be fixed for consistency.
capturer:
build:
context: ./sniffer
- dockerfile: Dockerfile
- command: sh -c "rm -rf /app/capture/* /app/capture/.* 2>/dev/null || true; touch /app/capture/tshark.out; /usr/bin/supervisord -c ./supervisord.conf"
+ dockerfile: Dockerfile
+ command: sh -c "rm -rf /app/capture/* /app/capture/.* 2>/dev/null || true; touch /app/capture/tshark.out; /usr/bin/supervisord -c ./supervisord.conf"
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| services: | |
| capturer: | |
| build: | |
| context: ./sniffer | |
| dockerfile: Dockerfile | |
| command: sh -c "rm -rf /app/capture/* /app/capture/.* 2>/dev/null || true; touch /app/capture/tshark.out; /usr/bin/supervisord -c ./supervisord.conf" | |
| container_name: capturer | |
| volumes: | |
| - ./capture:/app/capture | |
| - ./sniffer/capturer.py:/app/capturer.py | |
| - ./sniffer/h2_decoder:/app/h2_decoder | |
| - ./sniffer/supervisord.conf:/app/supervisord.conf | |
| network_mode: host | |
| # CRITICAL: Add capabilities to allow packet sniffing | |
| cap_add: | |
| - NET_RAW | |
| - NET_ADMIN | |
| - SYS_ADMIN | |
| stop_grace_period: 50s | |
| depends_on: | |
| - db | |
| healthcheck: | |
| test: [ "CMD", "sh", "-c", "ls -A /app/capture | grep -q ." ] | |
| interval: 10s | |
| timeout: 5s | |
| retries: 3 | |
| start_period: 20s | |
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 5-5: trailing spaces
(trailing-spaces)
[warning] 6-6: too many spaces after colon
(colons)
🤖 Prompt for AI Agents
In tests/integration/docker-compose.yaml lines 1 to 28, fix YAML formatting
issues by ensuring consistent indentation, removing trailing spaces, and
aligning keys and values properly. Verify that list items under keys like
cap_add and volumes are correctly indented with two spaces, and that there are
no extra spaces after colons or at line ends to maintain YAML syntax correctness
and consistency.
| healthcheck: | ||
| # TODO: Modify Dockerfile to install curl and check if webui is started | ||
| test: [ "CMD", "true" ] | ||
| interval: 5s | ||
| timeout: 3s | ||
| retries: 5 | ||
| start_period: 10s | ||
|
|
🛠️ Refactor suggestion
Implement proper healthcheck for webui service
The current healthcheck always returns true, which doesn't verify actual service health.
Since the TODO mentions installing curl, here's a proper healthcheck implementation:
healthcheck:
- # TODO: Modify Dockerfile to install curl and check if webui is started
- test: [ "CMD", "true" ]
+ test: [ "CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/api/version" ]
interval: 5s
timeout: 3s
retries: 5
  start_period: 10s

Note: wget is often available in Alpine-based images. If not, the Dockerfile would need to be modified to include it.
Would you like me to help implement proper healthchecks for all services or create an issue to track this?
📝 Committable suggestion
| healthcheck: | |
| # TODO: Modify Dockerfile to install curl and check if webui is started | |
| test: [ "CMD", "true" ] | |
| interval: 5s | |
| timeout: 3s | |
| retries: 5 | |
| start_period: 10s | |
| healthcheck: | |
| test: [ "CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5000/api/version" ] | |
| interval: 5s | |
| timeout: 3s | |
| retries: 5 | |
| start_period: 10s |
🤖 Prompt for AI Agents
In tests/integration/docker-compose.yaml around lines 109 to 116, the
healthcheck for the webui service currently always returns true and does not
verify actual service health. Update the healthcheck to use a command like curl
or wget to check if the webui endpoint is responsive. This may require modifying
the Dockerfile to install curl or wget. Replace the test command with a proper
HTTP request that returns success only if the webui is up.
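The same readiness check can also be driven from the test-runner side: instead of (or in addition to) the container healthcheck, poll the endpoint until it answers. A stdlib-only sketch, where the function name, URL, and timings are placeholders:

```python
import time
import urllib.error
import urllib.request

def wait_until_healthy(url, retries=5, interval=0.5, timeout=3):
    """Return True once `url` answers with a non-5xx status, else False."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 500:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not up yet; retry after a short pause
        time.sleep(interval)
    return False
```

For example, the integration harness could call `wait_until_healthy("http://localhost:5000")` before starting the gnbsim scenario.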
| if len(payload) > 0: | ||
| decoded = payload.decode("utf-8", errors="replace") | ||
| # Truncate very long payloads | ||
| if len(decoded) > 1000: | ||
| decoded = decoded[:1000] + "\n... (truncated)" | ||
| log_lines.append(f" Payload:\n{decoded}") | ||
| else: | ||
| log_lines.append(" Payload: (empty)") | ||
| except Exception as e: | ||
| log_lines.append( | ||
| f" Payload (hex): {payload.hex()[:200]}{'...' if len(payload) > 100 else ''}" | ||
| ) | ||
|
|
Remove unused exception variable
The exception variable e is assigned but never used in the exception handler.
except Exception as e:
log_lines.append(
f" Payload (hex): {payload.hex()[:200]}{'...' if len(payload) > 100 else ''}"
)

Since the exception details aren't being logged, consider either using the exception or removing the variable:
- except Exception as e:
+ except Exception:
log_lines.append(
f" Payload (hex): {payload.hex()[:200]}{'...' if len(payload) > 100 else ''}"
)
📝 Committable suggestion
| if len(payload) > 0: | |
| decoded = payload.decode("utf-8", errors="replace") | |
| # Truncate very long payloads | |
| if len(decoded) > 1000: | |
| decoded = decoded[:1000] + "\n... (truncated)" | |
| log_lines.append(f" Payload:\n{decoded}") | |
| else: | |
| log_lines.append(" Payload: (empty)") | |
| except Exception as e: | |
| log_lines.append( | |
| f" Payload (hex): {payload.hex()[:200]}{'...' if len(payload) > 100 else ''}" | |
| ) | |
| if len(payload) > 0: | |
| decoded = payload.decode("utf-8", errors="replace") | |
| # Truncate very long payloads | |
| if len(decoded) > 1000: | |
| decoded = decoded[:1000] + "\n... (truncated)" | |
| log_lines.append(f" Payload:\n{decoded}") | |
| else: | |
| log_lines.append(" Payload: (empty)") | |
| except Exception: | |
| log_lines.append( | |
| f" Payload (hex): {payload.hex()[:200]}{'...' if len(payload) > 100 else ''}" | |
| ) |
🧰 Tools
🪛 Ruff (0.12.2)
202-202: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
🤖 Prompt for AI Agents
In tests/integration/sniffer/capturer.py around lines 194 to 206, the exception
handler assigns the variable 'e' but does not use it. To fix this, remove the
unused variable 'e' from the except clause so it reads 'except Exception:'
instead of 'except Exception as e:'. This cleans up the code by eliminating the
unused variable.
| def log_l4_packet(proto, src_ip, dst_ip, src_port, dst_port, payload): | ||
| log_lines = [ | ||
| f"----- L4+ PACKET -----", | ||
| f" Protocol: {proto}", | ||
| f" From: {src_ip}:{src_port}", | ||
| f" To: {dst_ip}:{dst_port}", | ||
| f" Payload: {payload[:200].hex() if isinstance(payload, bytes) else str(payload)[:200]}{'... (truncated)' if len(payload) > 200 else ''}", | ||
| "-" * 40, | ||
| ] | ||
| logging.debug("\n".join(log_lines)) | ||
|
|
Remove unnecessary f-string prefix
The f-string on line 558 doesn't contain any placeholders.
def log_l4_packet(proto, src_ip, dst_ip, src_port, dst_port, payload):
log_lines = [
- f"----- L4+ PACKET -----",
+ "----- L4+ PACKET -----",
f" Protocol: {proto}",
f" From: {src_ip}:{src_port}",
f" To: {dst_ip}:{dst_port}",
f" Payload: {payload[:200].hex() if isinstance(payload, bytes) else str(payload)[:200]}{'... (truncated)' if len(payload) > 200 else ''}",
"-" * 40,
]
 logging.debug("\n".join(log_lines))
📝 Committable suggestion
| def log_l4_packet(proto, src_ip, dst_ip, src_port, dst_port, payload): | |
| log_lines = [ | |
| f"----- L4+ PACKET -----", | |
| f" Protocol: {proto}", | |
| f" From: {src_ip}:{src_port}", | |
| f" To: {dst_ip}:{dst_port}", | |
| f" Payload: {payload[:200].hex() if isinstance(payload, bytes) else str(payload)[:200]}{'... (truncated)' if len(payload) > 200 else ''}", | |
| "-" * 40, | |
| ] | |
| logging.debug("\n".join(log_lines)) | |
| def log_l4_packet(proto, src_ip, dst_ip, src_port, dst_port, payload): | |
| log_lines = [ | |
| "----- L4+ PACKET -----", | |
| f" Protocol: {proto}", | |
| f" From: {src_ip}:{src_port}", | |
| f" To: {dst_ip}:{dst_port}", | |
| f" Payload: {payload[:200].hex() if isinstance(payload, bytes) else str(payload)[:200]}{'... (truncated)' if len(payload) > 200 else ''}", | |
| "-" * 40, | |
| ] | |
| logging.debug("\n".join(log_lines)) |
🧰 Tools
🪛 Ruff (0.12.2)
558-558: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In tests/integration/sniffer/capturer.py around lines 556 to 566, the f-string
on line 558 is used without any placeholders, making the f-string prefix
unnecessary. Remove the 'f' prefix from the string on line 558 to convert it to
a regular string literal.
Refactors Dockerfiles and docker-bake.hcl to improve build processes and image management.

- Changes gnbsim Dockerfile to use UnifyAir as the maintainer.
- Modifies omnipath Dockerfile to support debug and release build modes via ARG.
- Introduces omnipath-debug and omnipath-release targets in docker-bake.hcl.
- Updates image tags to reflect the new UnifyAir organization and versioning scheme.
- Removes unnecessary test certificates and keys.
- Updates sniffer Dockerfile to use an image and install necessary packages, and set UnifyAir as the maintainer.