Tags: nrwl/nx
# chore(repo): provision build toolchain via mise in publish workflow (#35593)

## Current Behavior

The `publish` workflow's matrix builds (Linux/macOS/Windows native binaries via N-API) install Java, Node.js, and pnpm manually inside each runner/container. With `@nx/dotnet` now in `nx.json`, these builds also need .NET to be available before the project graph can be loaded — and there's no .NET install on any of the matrix entries today, so the workflow fails at the `pnpm nx run-many --target=build-native` step.

The macOS and `armv7-unknown-linux-gnueabihf` matrix entries already use `mise-action` and `mise.toml`, but the four Linux *docker* entries (Debian + Alpine, x64 + arm64) bypass mise entirely and provision tools through hand-rolled `apt-get` / `apk` / `nodesource` / `npm i -g pnpm` steps.

## Expected Behavior

- All four Linux docker matrix entries now install `mise` from a signed/distro source (apt repo at `https://mise.jdx.dev/deb` for Debian, `apk add mise` from Alpine `community` for Alpine) and provision their entire toolchain — Node.js, Java, .NET, Maven, corepack — from `mise.toml`. This drops ~30 lines of bespoke install logic per entry and keeps versions in lockstep with the non-docker matrix entries, which already use `mise-action`.
- Windows entries gain `choco install dotnet-9.0-sdk -y` alongside the existing OpenJDK install (mise's Windows .NET path is broken upstream — see [jdx/mise#4738](jdx/mise#4738)).
- The FreeBSD build sets `NX_DOTNET_DISABLE=true` (added to both the `env:` block and the `cross-platform-actions/action` `environment_variables` allowlist so the var actually crosses into the FreeBSD VM) to opt out of the plugin entirely.
- `NODE_VERSION` is now forwarded into `docker run` so containers honor the workflow's pinned Node version through `mise.toml`'s tera template instead of falling back to its `24.11.0` default.
- `mise` itself is installed only via signed repositories — no `curl https://mise.run | sh` — so a hijacked DNS lookup against `mise.run` cannot drop a malicious script into our publish pipeline.

## Related Issue(s)

N/A — workflow fix triggered by `@nx/dotnet` being added to `nx.json`.
# chore(testing): split NX_E2E_SKIP_CLEANUP into global/project-scoped vars (#35572)

## Current Behavior

`NX_E2E_SKIP_CLEANUP` is set to `'true'` in every Linux/macOS e2e matrix entry to gate the build-cache reuse check in `e2e/utils/global-setup.ts` (skip wiping `e2eCwd` + republishing to verdaccio when `./build` already exists). PR #35042 added an early-return to `cleanupProject` in `e2e/utils/create-project-utils.ts` guarded by the same env var name to expose a *local* debug opt-in (preserve the per-test tmp project for inspection).

Because CI already had the var set, the new early-return fires on every CI run, silently disabling the per-test `nx reset` + `tmpProjPath()` removal that previously kept orphan daemons from leaking. Jest hangs after all tests pass and the workflow times out at 60 minutes.

## Expected Behavior

Each lifecycle hook is gated by a distinct, scope-specific env var:

- `NX_E2E_SKIP_GLOBAL_CLEANUP` — global-setup.ts (CI sets it).
- `NX_E2E_SKIP_PROJECT_CLEANUP` — cleanupProject (developer-set locally for debugging only).

Per-test cleanup runs in CI again, jest exits cleanly, and nightly e2e jobs no longer hit the 60-minute cap.

## Validation

Verified via a manually-dispatched e2e nightly run on this branch (with the matrix temporarily narrowed to `Linux/{npm,pnpm,yarn}/20 e2e-node`, the matrix that hung at 60 min on master): https://github.com/nrwl/nx/actions/runs/25375231501 — all 3 jobs passed in ~25 minutes each.

---------

Co-authored-by: nx-cloud[bot] <71083854+nx-cloud[bot]@users.noreply.github.com>
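The split described above can be sketched as two independent predicates. This is illustrative only (the helper names are stand-ins; the real checks live inline in `global-setup.ts` and `cleanupProject`):

```typescript
type Env = Record<string, string | undefined>;

// global-setup.ts side: CI sets this to reuse ./build + the verdaccio state.
function shouldSkipGlobalCleanup(env: Env): boolean {
  return env.NX_E2E_SKIP_GLOBAL_CLEANUP === 'true';
}

// cleanupProject side: local-only debug opt-in to keep the per-test tmp project.
function shouldSkipProjectCleanup(env: Env): boolean {
  return env.NX_E2E_SKIP_PROJECT_CLEANUP === 'true';
}

// CI sets only the global var, so per-test cleanup runs again.
const ci: Env = { NX_E2E_SKIP_GLOBAL_CLEANUP: 'true' };
```

Because the two hooks now read different names, CI's global opt-in can no longer accidentally trip the per-test early-return.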
# fix(core): use workspace root for package manager detection in script targets (#35550)

## Current Behavior

`readTargetsFromPackageJson` (in `packages/nx/src/utils/package-json.ts`) receives a `workspaceRoot` argument but never passes it to package manager detection:

```ts
for (const script of includedScripts) {
  packageManagerCommand ??= getPackageManagerCommand(); // ← no workspaceRoot
  res[script] = buildTargetFromScript(script, scripts, packageManagerCommand);
}
```

Two consequences:

1. **Wrong package manager** — `detectPackageManager()` defaults `dir = ''`, so the lockfile probe runs in the CWD, not the workspace. When that finds nothing it falls back to `npm_config_user_agent`, so the inferred `runCommand` (`npm run X` vs `pnpm run X` vs `yarn X`) on script targets ends up depending on whoever invoked the nx process rather than on the workspace's actual lockfile.
2. **Module-level cache** — `let packageManagerCommand` (set with `??=`) memoizes the first detection result across all subsequent calls in the process. So even if the first call had the right `workspaceRoot`, every later call inherits that detection regardless of *its* `workspaceRoot`. This is also why `packages/nx/src/plugins/package-json/create-nodes.spec.ts` had four pre-existing snapshot failures locally (`pnpm run …` instead of the expected `npm run …`) — the first test in any process locked detection to the host's PM.

This is a follow-up to #35116, which moved package manager detection into the `createNodes` callback for the inferred plugins but missed this code path.

## Expected Behavior

- Drop the module-level cache.
- Thread `workspaceRoot` into both `detectPackageManager` and `getPackageManagerCommand`, so the lockfile probe runs in the right directory.
```ts
if (includedScripts.length > 0) {
  const packageManagerCommand = getPackageManagerCommand(
    detectPackageManager(workspaceRoot),
    workspaceRoot
  );
  for (const script of includedScripts) {
    res[script] = buildTargetFromScript(script, scripts, packageManagerCommand);
  }
}
```

The `packages/nx/src/plugins/package-json/create-nodes.spec.ts` fixture now seeds `package-lock.json` into memfs in a `beforeEach`, matching the pattern #35116 established for plugin specs. Without the lockfile the detector still falls back to the env var; with it, every test deterministically picks `npm`, matching the existing snapshots.

## Verification

Before this PR: 4 failures in `packages/nx/src/plugins/package-json/create-nodes.spec.ts`:

```
✕ should build projects from package.json files
✕ should store js package metadata
✕ should add a script target if the sibling project.json file does not exist
✕ should add a script target if the sibling project.json exists but does not have a conflicting target

Tests: 4 failed, 7 passed, 11 total
```

After this PR:

```
Tests: 11 passed, 11 total
```

## Related Issue(s)

Follow-up to #35116.
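The module-level cache pitfall can be reproduced in a few lines. This is a stand-alone illustration, not the real nx internals (the names and the injected `probe` are invented for the sketch):

```typescript
let cachedPm: string | undefined;

// Buggy shape: the module-level ??= locks in the FIRST detection result for
// the whole process, silently ignoring later workspaceRoot arguments.
function detectBuggy(workspaceRoot: string, probe: (dir: string) => string): string {
  cachedPm ??= probe(workspaceRoot);
  return cachedPm;
}

// Fixed shape: detect per call, scoped to the workspaceRoot actually passed.
function detectFixed(workspaceRoot: string, probe: (dir: string) => string): string {
  return probe(workspaceRoot);
}

// Stand-in lockfile probe: repo-a has a pnpm lockfile, everything else npm.
const probe = (dir: string) => (dir === '/repo-a' ? 'pnpm' : 'npm');
```

With the buggy shape, whichever workspace is probed first decides the answer for every subsequent call in the process, which is exactly the snapshot flakiness the spec file exhibited.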
# feat(gradle): stream batch task results to nx as they finish (#35487)

## Current Behavior

The Gradle batch executor (`@nx/gradle:gradle` in batch mode) returns a `Promise<BatchResults>`. The Kotlin batch runner serializes the entire result map to a single JSON blob and `println`s it once at the end of the run. The Node-side executor accumulates stdout chunks and `JSON.parse`s them after the JVM exits, so Nx only learns about task outcomes in one burst when the whole batch is done.

The Maven batch executor was migrated to streaming a while ago — it returns an `AsyncGenerator` and the Kotlin runner emits `NX_RESULT:{json}` lines as each task finishes — but the Gradle batch executor was never updated to match.

## Expected Behavior

The Gradle batch executor now mirrors the Maven batch executor's streaming protocol, with per-task results streamed live during both **build** and **test** task execution.

### Kotlin runner (`packages/gradle/batch-runner`)

- **`ResultEmitter`** writes `NX_RESULT:{json}` lines to stdout, one per task, with a thread-safe dedupe set so emission can happen from build/test listeners without double-reporting.
- **`runBuildLauncher`** emits per build task as `TaskOutputCapture` detects the next task's `> Task :foo:bar` header, with an end-of-build flush for the final task.
- **`runTestLauncher`** emits per Nx test task at its class-level `TestFinishEvent`, with a `TaskFinishEvent` fallback for tasks that never produced a class event (compile failure, exclusion). Method-level failures are sticky so a later passing method in the same class can't mask an earlier failure.
- **`NxBatchRunner.main`** ends with `exitProcess(0)` so lingering non-daemon threads from the Gradle Tooling API can't keep the JVM alive after task work completes. The trailing `results.forEach { emit }` loop now only covers `finalizeTaskResults`-synthesized entries (excluded/skipped tasks).
### Node executor (`packages/gradle/src/executors/gradle/gradle-batch.impl.ts`)

- `gradleBatch` is now an `async function*` returning `AsyncGenerator<{ task; result: TaskResult }>`.
- `streamTasksInBatch` spawns the JVM, reads stdout via `readline`, and **drains `NX_RESULT` lines into an in-memory queue**, yielding from the queue. Yielding inside the readline loop creates back-pressure — slow consumers block `yield`, readline pauses, the OS pipe between Java and Node fills, and Java's `println` blocks on a full pipe. The queue decouples reading from yielding so back-pressure can no longer deadlock the JVM.
- Stderr stays inherited so Gradle/JUnit progress flows to the terminal in real time.
- Tasks the runner never reports get yielded as failed at the end so Nx never hangs.

### Project graph dependency

`packages/gradle/project.json` adds `:gradle-batch-runner` to `implicitDependencies`. The gradle package bundles the batch-runner JAR and references it at runtime via `batchRunnerPath`; without this, `nx affected` wouldn't pick up gradle when only the runner changed.

### Bug along the way: className format mismatch

`RegexTestParser.kt` records `testClassName` as the **simple** class name (e.g. `MyTest`), but Gradle's `JvmTestOperationDescriptor.className` is the **fully qualified** name (`com.example.MyTest`). The exact-match lookup in the test listener was failing for every class, so per-class `TestStartEvent`/`TestFinishEvent` never matched an Nx task — every Nx task fell through to the `TaskFinishEvent` fallback at the end of the Gradle test task, all sharing the same emission time and the cumulative shared output buffer. `resolveNxTaskId` now looks up by FQN first, then by the suffix after the last `.`, so events match either format.

### Known trade-off

Tests under the same Gradle test task share the captured per-Gradle-task output for `terminalOutput`.
With JUnit `--parallel` the bytes interleave anyway, and Gradle's `TestLauncher` doesn't expose per-test stdout segmentation through `setStandardOutput` — getting truly per-class `terminalOutput` would require either subscribing to `OperationType.TEST_OUTPUT` (which diverts stdout away from the standard output stream and didn't reliably fire for some setups in testing) or recording fully-qualified class names in the project-graph plugin so we can match `TestOutputEvent` parents precisely. Filed as a follow-up.

The on-the-wire change matches the existing Maven contract (`run-batch.ts` already special-cases `isAsyncIterator`), so no Nx core changes are needed.

## Related Issue(s)
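The read/yield decoupling described above boils down to a push-based async queue: readline `'line'` handlers push synchronously (never blocking the reader), while the generator side awaits items at its own pace. A minimal sketch under those assumptions (class and payload names are illustrative, not the real executor code):

```typescript
class AsyncQueue<T> {
  private items: T[] = [];
  private waiters: ((value: T) => void)[] = [];

  // Called from the stdout 'line' handler; never blocks the reading side.
  push(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(item);
    else this.items.push(item);
  }

  // Awaited by the yielding side; resolves as soon as an item is available.
  next(): Promise<T> {
    if (this.items.length > 0) return Promise.resolve(this.items.shift()!);
    return new Promise<T>((resolve) => this.waiters.push(resolve));
  }
}

const NX_RESULT_PREFIX = 'NX_RESULT:';
const queue = new AsyncQueue<{ task: string; success: boolean }>();

// Simulated 'line' events from the JVM's stdout:
for (const line of [
  'NX_RESULT:{"task":"app:build","success":true}',
  '> Task :app:test',
  'NX_RESULT:{"task":"app:test","success":false}',
]) {
  if (line.startsWith(NX_RESULT_PREFIX)) {
    queue.push(JSON.parse(line.slice(NX_RESULT_PREFIX.length)));
  }
}
```

Because pushes complete immediately, a slow consumer only grows the Node-side array; the OS pipe between Java and Node keeps draining, so the JVM's `println` can never block on a full pipe.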
# fix(core): show flaky-task count in run summary (#35491)

## Current Behavior

When Nx's task-history life cycle detects more than one flaky task, the summary header renders with **two consecutive spaces and no number** in place of the count, e.g.:

```
>  NX   Nx detected  flaky tasks

myproject:test
otherproject:e2e
```

The singular case (one flaky task) renders correctly as `Nx detected a flaky task`.

## Expected Behavior

```
>  NX   Nx detected 2 flaky tasks

myproject:test
otherproject:e2e
```

## Root Cause

Both `task-history-life-cycle.ts` and the legacy `task-history-life-cycle-old.ts` had:

```ts
title: `Nx detected ${
  this.flakyTasks.length === 1 ? 'a flaky task' : ' flaky tasks'
}`,
```

The plural branch is a literal `' flaky tasks'` string with a leading space and **no count interpolation** — so the template renders `Nx detected ` + `' flaky tasks'` = `Nx detected  flaky tasks` (two spaces, no number).

## Fix

Replace the plural literal with an interpolated `` `${this.flakyTasks.length} flaky tasks` `` so the count appears before the word `flaky`. Singular wording is unchanged.

```ts
title: `Nx detected ${
  this.flakyTasks.length === 1
    ? 'a flaky task'
    : `${this.flakyTasks.length} flaky tasks`
}`,
```

Same fix applied symmetrically in both life-cycle files.

## Tests

No existing unit test covers `printFlakyTasksMessage()`'s formatted output (the surrounding life cycles don't have a `*.spec.ts`). Adding one would require mocking the task-history daemon channel and life-cycle hooks — out of scope for a one-line formatting fix. The change is small enough to verify by inspection of the diff.

## Related Issue(s)

(reported internally; no public issue)
# fix(core): preserve hydrateFileMap back-compat for cached nx-cloud workers (#35502)

## Current Behavior

`nx@23.0.0-beta.2` Nx Cloud V4 distributed-agent workers crash on every task with:

```
Failed to get external value
    at new NativeTaskHasherImpl (.../native-task-hasher-impl.js:25:23)
    at new InProcessTaskHasher (.../task-hasher.js:68:27)
    at createTaskHasher (.../create-task-hasher.js:13:16)
    at createOrchestrator (.../init-tasks-runner.js:86:60)
    at runDiscreteTasks (.../init-tasks-runner.js:114:32)
    at executeAndStoreTask (.../discrete-task-worker.js:1:832861)
```

#34425 ("remove redundant `allWorkspaceFiles` from the project graph pipeline") changed two helpers in `packages/nx/src/project-graph/build-project-graph.ts`:

- `hydrateFileMap(fileMap, allWorkspaceFiles, rustReferences)` → `hydrateFileMap(fileMap, rustReferences)`
- `getFileMap()` no longer returns `allWorkspaceFiles`

Cached Nx Cloud V4 workers (e.g. `.nx/cache/cloud/2604.29.7/lib/core/runners/distributed-agent/v4/discrete-task-worker.js`) `require('nx/src/project-graph/build-project-graph')` directly and still call the 3-arg form:

```js
hydrateFileMap(
  { projectFileMap, nonProjectFiles },
  allWorkspaceFiles, // lands in rustReferences slot on beta.2
  rustReferences // silently dropped
);
```

The `FileData[]` array poisons `storedRustReferences`. Later `createTaskHasher` reads `.projectFiles` / `.allWorkspaceFiles` off the array (both `undefined`), passes them to `new TaskHasher(...)`, and napi-rs throws `"Failed to get external value"` trying to coerce `undefined` into `&External<Arc<…>>`.

## Expected Behavior

`hydrateFileMap` accepts both the new 2-arg shape and the legacy 3-arg shape, detected by `Array.isArray()` on the 2nd argument. `getFileMap()` re-exposes `allWorkspaceFiles: []` so cached workers that destructure it (for telemetry / `v4log`) see the property instead of `undefined`. Both surfaces are flagged `@deprecated` so we can remove them in a later major once cached V4 workers age out.
A regression test pins both arities — verified red on the pre-fix code and green with the fix.

## Related Issue(s)

<!-- Reported via internal channels (ocean) — no public issue. -->
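The arity detection can be sketched as follows. The types are simplified stand-ins (the real `hydrateFileMap` takes a FileMap and native rust references), so treat this as an illustration of the `Array.isArray()` dispatch, not the actual implementation:

```typescript
type FileData = { file: string; hash: string };
type RustRefs = { ref: string };

function hydrateFileMapCompat(
  fileMap: object,
  second: FileData[] | RustRefs,
  third?: RustRefs
): RustRefs {
  if (Array.isArray(second)) {
    // Legacy 3-arg call: (fileMap, allWorkspaceFiles, rustReferences).
    // The FileData[] in the middle slot is skipped instead of being
    // mistaken for the rust references.
    return third!;
  }
  // New 2-arg call: (fileMap, rustReferences).
  return second;
}

const refs: RustRefs = { ref: 'native' };
```

Both call shapes now land on the same correct references, so the `FileData[]` can no longer poison the stored value.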
# fix(angular): disable vitest watch by default (#35493)

## Current Behavior

Generated Angular projects using `vitest-angular` create a test target without an explicit `watch` value. The Angular unit-test builder defaults watch mode to `true` in TTY environments, so running generated projects through monorepo workflows such as `nx run-many` can leave test tasks running instead of exiting.

## Expected Behavior

Generated Angular `vitest-angular` test targets explicitly set `watch: false`, matching Nx Vitest's default non-watch behavior and preserving terminating test tasks for `run-many` and affected workflows. Users can still opt into watch mode with `--watch` or a watch configuration.
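For illustration, a generated test target with the explicit flag might look like the fragment below. The builder name and surrounding shape are placeholders for whatever the generator actually emits; only the `watch: false` option is the point:

```json
{
  "test": {
    "builder": "@angular/build:unit-test",
    "options": {
      "watch": false
    }
  }
}
```

Making the value explicit means the builder's TTY-sensitive default never applies, so the same target behaves identically in a terminal and in CI.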
# fix(nextjs): use cached project graph in withNx (#35475)

## Current Behavior

`@nx/next/plugins/with-nx.ts` calls `createProjectGraphAsync()` from inside `next.config.js` evaluation. This has two negative effects:

1. **Sandbox violations.** Calling `createProjectGraphAsync` inside the build re-runs every registered Nx plugin's `createNodesV2`. For example, on `nx-dev:next:build` this generates 562 unexpected reads — including 548 files from `packages/nx/dist/**/*.js` (Nx core internals loaded by the graph machinery) plus sibling project files like `nx-dev/nx-dev-e2e/playwright.config.ts`, spec files, and `eslint.config.mjs` files that are read by `@nx/playwright/plugin` and `@nx/eslint/plugin` while inferring targets. None of these are real input dependencies of the Next.js build — they are an implementation detail of graph creation.
2. **Daemon socket leak (workaround in #34518).** The same call also opens a daemon client socket that keeps the Node event loop alive. PR #34518 patched this with `resetDaemonClient: true` after Jest started hanging in #32880. The socket exists only because we are talking to the daemon to (re)build the graph at all.

Both problems share a root cause: graph creation is being run inside the build, when the graph has already been built and cached by the Nx task runner before `next build` ever starts.

## Expected Behavior

`withNx` reads the already-cached graph instead of rebuilding it.
This matches the pattern used by `@nx/webpack` (`packages/webpack/src/plugins/nx-webpack-plugin/lib/normalize-options.ts`) and `@nx/rspack` (`packages/rspack/src/plugins/utils/plugins/normalize-options.ts`), both of which call `readCachedProjectGraph()` with the comment _"Since this is invoked by the executor, the graph has already been created and cached."_ The early-return guard already in `withNx` (no `NX_TASK_TARGET_TARGET` env var) ensures we only reach the graph-reading branch when running inside an Nx task, which is exactly when the cached graph is guaranteed to exist.

This change:

- Eliminates the 562 sandbox violations on `nx-dev:next:build` (verified locally by patching `node_modules/@nx/next/plugins/with-nx.js` and re-running the build).
- Removes the need for `resetDaemonClient: true` since no daemon connection is opened in the first place — also obviating the original Jest hang.
- Speeds up `next build` slightly by skipping a full graph re-creation pass.

## Related Issue(s)

Follow-up to #34518 / #32880 — fixes the underlying cause that the daemon-reset workaround was treating.
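The guard-then-read flow can be sketched like this. The graph reader is injected so the sketch stays self-contained; in the real plugin the call is nx's `readCachedProjectGraph()`, and the function name here is invented:

```typescript
type ProjectGraph = { nodes: Record<string, unknown> };

function graphForWithNx(
  env: Record<string, string | undefined>,
  readCachedProjectGraph: () => ProjectGraph
): ProjectGraph | null {
  // No NX_TASK_TARGET_TARGET: next.config.js is being evaluated outside an
  // Nx task, so no cached graph is guaranteed; withNx takes its early return.
  if (!env.NX_TASK_TARGET_TARGET) return null;
  // Inside a task the runner has already built and cached the graph, so we
  // read it instead of re-running every plugin's createNodesV2.
  return readCachedProjectGraph();
}

const stubGraph: ProjectGraph = { nodes: { 'nx-dev': {} } };
const readStub = () => stubGraph;
```

Because the read is a pure cache lookup, no daemon socket is opened and no plugin code runs during the Next.js build.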
# feat(core): add support for '...' as a spread token when merging target config (#34285)

When merging project configurations (e.g., from plugins, `project.json`, target defaults in `nx.json`), array and object properties are completely replaced by the new value. There is no way to extend or merge with the base value.

For example, if a plugin infers:

```json
{
  "targets": {
    "build": {
      "inputs": ["default", "{projectRoot}/**/*"]
    }
  }
}
```

And `nx.json` has target defaults:

```json
{
  "targetDefaults": {
    "build": {
      "inputs": ["production"]
    }
  }
}
```

The result would be `["production"]` — completely replacing the inferred inputs rather than combining them.

This PR adds support for `"..."` as a spread token when merging configurations. Users can now control how arrays and objects are merged by specifying where the base value should be inserted.

**Array spread:**

```json
{
  "inputs": ["production", "...", "{workspaceRoot}/.eslintrc.json"]
}
```

Results in: `["production", "default", "{projectRoot}/**/*", "{workspaceRoot}/.eslintrc.json"]`

**Object spread:**

```json
{
  "options": {
    "env": {
      "NEW_VAR": "value",
      "...": true,
      "OVERRIDE_VAR": "overridden"
    }
  }
}
```

Spreads the base object's properties at the position of `"..."`, with keys defined after the spread taking precedence.

This works in:

- Top-level target properties (`inputs`, `outputs`, `dependsOn`)
- Target `options` and nested option objects (one layer deep)
- Target `configurations` and their options (one layer deep)
- Both `project.json` merging and `nx.json` target defaults

Fixes #

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: nx-cloud[bot] <71083854+nx-cloud[bot]@users.noreply.github.com>
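The array half of the merge semantics above can be sketched in a few lines. This is an illustrative reimplementation, not the actual nx core merge code (which also covers object spreads and nested options):

```typescript
function mergeArrayWithSpread<T>(base: T[], override: (T | '...')[]): T[] {
  // No spread token: the override replaces the base entirely (old behavior).
  if (!override.includes('...')) return override as T[];
  // "..." splices the base array in at the token's position.
  return override.flatMap((item) => (item === '...' ? base : [item]));
}

// Reproduces the example from the PR description:
const merged = mergeArrayWithSpread(
  ['default', '{projectRoot}/**/*'],
  ['production', '...', '{workspaceRoot}/.eslintrc.json']
);
```

Keeping replacement as the default when no token is present means existing configurations behave exactly as before; the spread is strictly opt-in.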