{ "cells": [ { "cell_type": "markdown", "id": "novel-stupid", "metadata": {}, "source": [ "# DOLFINx in Parallel with MPI" ] }, { "cell_type": "markdown", "id": "drawn-entrepreneur", "metadata": {}, "source": [ "Authors: Jack S. Hale, Corrado Maurini.\n", "\n", "In scripts using DOLFINx you will have seen the use of code like `MPI.COMM_WORLD` and `x.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)`.\n", "\n", "This notebook aims to explain when and why you need to use them in your own scripts.\n", "\n", "Because notebooks do not elegantly support MPI we will execute the scripts from within the Jupyter Notebook using the shell magic `!`.\n", "\n", "### DOLFINx uses MPI-based parallelism\n", "\n", "DOLFINx uses the Message Passing Interface (MPI) model to execute your code in parallel.\n", "\n", "Simply put, MPI allows messages to be communicated very quickly between *processes* running on the same or even different computers (e.g. in a high-performance computing cluster).\n", "\n", "A very simplified description of MPI is as follows. \n", "\n", "1) $N$ *processes* are started within a *communicator*. \n", "2) The communicator containing all processes is called the *world* communicator. You will usually use the world communicator, although splitting the communicator is possible.\n", "3) Each process is given by the communicator a unique identifier called the *rank*.\n", "\n", "### Hello World\n", " " ] }, { "cell_type": "code", "execution_count": 1, "id": "powered-porter", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "print(\"Hello world\")" ] } ], "source": [ "!cat 01-hello-world.py" ] }, { "cell_type": "code", "execution_count": 25, "id": "emerging-opera", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello world\n" ] } ], "source": [ "!mpirun -n 1 python3 01-hello-world.py" ] }, { "cell_type": "code", "execution_count": 2, "id": "complete-damages", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello world\n", "Hello world\n" ] } ], "source": [ "!mpirun -n 2 python3 01-hello-world.py" ] }, { "cell_type": "markdown", "id": "according-guest", "metadata": {}, "source": [ "Two totally separate processes printed `Hello world` to the screen. Not very exciting!" ] }, { "cell_type": "markdown", "id": "liquid-ministry", "metadata": {}, "source": [ "### Hello World with MPI\n", "\n", "Python has makes MPI through the optional `mpi4py` package (https://mpi4py.readthedocs.io/en/stable/index.html)." 
] }, { "cell_type": "code", "execution_count": 3, "id": "aging-looking", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "\n", "comm = MPI.COMM_WORLD\n", "print(f\"Hello world from rank {comm.rank} of {comm.size}\")" ] } ], "source": [ "!cat 02-hello-world-mpi.py" ] }, { "cell_type": "code", "execution_count": 4, "id": "comparable-construction", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello world from rank 0 of 1\n" ] } ], "source": [ "!mpirun -n 1 python3 02-hello-world-mpi.py" ] }, { "cell_type": "code", "execution_count": 5, "id": "increased-absence", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello world from rank 1 of 2\n", "Hello world from rank 0 of 2\n" ] } ], "source": [ "!mpirun -n 2 python3 02-hello-world-mpi.py" ] }, { "cell_type": "markdown", "id": "attached-nutrition", "metadata": {}, "source": [ "What happened? Two totally separate processes printed their rank (their unique identifier within the communicator) to the screen." ] }, { "cell_type": "markdown", "id": "close-purple", "metadata": { "tags": [] }, "source": [ "### Some basic communication" ] }, { "cell_type": "code", "execution_count": 6, "id": "frequent-contributor", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "\n", "comm = MPI.COMM_WORLD\n", "assert(comm.size == 2)\n", "\n", "if comm.rank == 0:\n", " b = 3\n", " c = 5\n", " a = b + c\n", " comm.send(a, dest=1, tag=20)\n", " print(f\"Rank {comm.rank} a: {a}\")\n", "elif comm.rank == 1:\n", " a = comm.recv(source=0, tag=20)\n", " print(f\"Rank {comm.rank} a: {a}\")" ] } ], "source": [ "!cat 03-communicate.py" ] }, { "cell_type": "code", "execution_count": 7, "id": "painful-joyce", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0 a: 8\n", "Rank 1 a: 8\n" ] } ], "source": [ "!mpirun -n 2 python3 03-communicate.py" ] }, { "cell_type": "markdown", "id": "handy-blank", "metadata": {}, "source": [ "MPI can do a lot more than this (https://mpi4py.readthedocs.io/en/stable/tutorial.html). The key points are:\n", "\n", "* $N$ identical versions of your program run, one on each process (rank). Each process takes different paths through the program depending on its *rank*.\n", "* Processes (and hence their memory) are totally independent.\n", "* Communication between processes is must be manually performed by the programmer (explicit)." ] }, { "cell_type": "markdown", "id": "spectacular-musician", "metadata": {}, "source": [ "### MPI and DOLFINx\n", "\n", "DOLFINx abstracts most of the difficult aspects of distributing your problem across the MPI communicator away from the user." 
] }, { "cell_type": "code", "execution_count": 8, "id": "living-tooth", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "import dolfinx\n", "import dolfinx.io\n", "\n", "# DOLFINx uses mpi4py communicators.\n", "comm = MPI.COMM_WORLD\n", "\n", "def mpi_print(s):\n", " print(f\"Rank {comm.rank}: {s}\")\n", "\n", "# When you construct a mesh you must pass an MPI communicator.\n", "# The mesh will automatically be *distributed* over the ranks of the MPI communicator.\n", "# Important: In this script we use dolfinx.cpp.mesh.GhostMode.none.\n", "# This is *not* the default (dolfinx.cpp.mesh.GhostMode.shared_facet).\n", "# We will discuss the effects of the ghost_mode parameter in the next section.\n", "mesh = dolfinx.UnitSquareMesh(comm, 1, 1, diagonal=\"right\", ghost_mode=dolfinx.cpp.mesh.GhostMode.none)\n", "mesh.topology.create_connectivity_all()\n", "\n", "mpi_print(f\"Number of local cells: {mesh.topology.index_map(2).size_local}\")\n", "mpi_print(f\"Number of global cells: {mesh.topology.index_map(2).size_global}\")\n", "mpi_print(f\"Number of local vertices: {mesh.topology.index_map(0).size_local}\")\n", "mpi_print(\"Cell (dim = 2) to vertex (dim = 0) connectivity\")\n", "mpi_print(mesh.topology.connectivity(2, 0))" ] } ], "source": [ "!cat 04-mpi-dolfinx.py" ] }, { "cell_type": "code", "execution_count": 9, "id": "outer-generator", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Number of local cells: 2\n", "Rank 0: Number of global cells: 2\n", "Rank 0: Number of local vertices: 4\n", "Rank 0: Cell (dim = 2) to vertex (dim = 0) connectivity\n", "Rank 0: with 2 nodes\n", " 0: [0 1 3 ]\n", " 1: [0 2 3 ]\n", "\n" ] } ], "source": [ "!mpirun -n 1 python3 04-mpi-dolfinx.py" ] }, { "cell_type": "code", "execution_count": 10, "id": "unavailable-architecture", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Number of local cells: 1\n", "Rank 0: Number of global cells: 2\n", "Rank 0: Number of local vertices: 3\n", "Rank 0: Cell (dim = 2) to vertex (dim = 0) connectivity\n", "Rank 0: with 1 nodes\n", " 0: [1 0 2 ]\n", "\n", "Rank 1: Number of local cells: 1\n", "Rank 1: Number of global cells: 2\n", "Rank 1: Number of local vertices: 1\n", "Rank 1: Cell (dim = 2) to vertex (dim = 0) connectivity\n", "Rank 1: with 1 nodes\n", " 0: [2 0 1 ]\n", "\n" ] } ], "source": [ "!mpirun -n 2 python3 04-mpi-dolfinx.py" ] }, { "cell_type": "markdown", "id": "cooked-berry", "metadata": {}, "source": [ "Now we will run a similar script but with `ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet` passed to the mesh constructor. It also prints a bit more output to help us understand what is going on." 
] }, { "cell_type": "code", "execution_count": 11, "id": "nearby-territory", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "import dolfinx\n", "import dolfinx.io\n", "\n", "# DOLFINx uses mpi4py communicators.\n", "comm = MPI.COMM_WORLD\n", "\n", "def mpi_print(s):\n", " print(f\"Rank {comm.rank}: {s}\")\n", "\n", "# When you construct a mesh you must pass an MPI communicator.\n", "# The mesh will automatically be *distributed* over the ranks of the MPI communicator.\n", "# Important: In this script we use dolfinx.cpp.mesh.GhostMode.none.\n", "# This is *not* the default (dolfinx.cpp.mesh.GhostMode.shared_facet).\n", "# We will discuss the effects of the ghost_mode parameter in the next section.\n", "mesh = dolfinx.UnitSquareMesh(comm, 1, 1, diagonal=\"right\", ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet)\n", "mesh.topology.create_connectivity_all()\n", "\n", "mpi_print(f\"Number of local cells: {mesh.topology.index_map(2).size_local}\")\n", "mpi_print(f\"Number of global cells: {mesh.topology.index_map(2).size_global}\")\n", "mpi_print(f\"Number of local vertices: {mesh.topology.index_map(0).size_local}\")\n", "mpi_print(f\"Number of global vertices: {mesh.topology.index_map(0).size_global}\")\n", "mpi_print(\"Cell (dim = 2) to vertex (dim = 0) connectivity\")\n", "mpi_print(mesh.topology.connectivity(2, 0))\n", "\n", "if comm.size == 1:\n", " mpi_print(\"Cell (dim = 2) to facet (dim = 0) connectivity\")\n", " mpi_print(mesh.topology.connectivity(2, 1))\n", " \n", "if comm.size == 2:\n", " mpi_print(f\"Ghost cells (global numbering): {mesh.topology.index_map(2).ghosts}\")\n", " mpi_print(f\"Ghost owner rank: {mesh.topology.index_map(2).ghost_owner_rank()}\")" ] } ], "source": [ "!cat 05-mpi-dolfinx-ghosts.py" ] }, { "cell_type": "code", "execution_count": 12, "id": "heard-montreal", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Number of local cells: 2\n", "Rank 0: Number of global cells: 2\n", "Rank 0: Number of local vertices: 4\n", "Rank 0: Number of global vertices: 4\n", "Rank 0: Cell (dim = 2) to vertex (dim = 0) connectivity\n", "Rank 0: with 2 nodes\n", " 0: [0 1 3 ]\n", " 1: [0 2 3 ]\n", "\n", "Rank 0: Cell (dim = 2) to facet (dim = 0) connectivity\n", "Rank 0: with 2 nodes\n", " 0: [3 2 0 ]\n", " 1: [4 2 1 ]\n", "\n" ] } ], "source": [ "!mpirun -n 1 python3 05-mpi-dolfinx-ghosts.py" ] }, { "cell_type": "markdown", "id": "reasonable-expression", "metadata": {}, "source": [ "There is no difference in the output when running on a MPI communicator with a single rank.\n", "\n", "However, when we run with two ranks we see something quite different.\n", "\n", "With the shared facet ghost mode enabled, each process will also store information about *some* cells owned by the neighbouring process. These cells are called *ghost cells*.\n", "\n", "In shared facet mode the logic of which cells are ghost cells is as follows:\n", "\n", "* All cells in the mesh share a common facet with one or more other cells.\n", "* The cells are partitioned between $N$ MPI ranks. The set of cells associated with each MPI rank is said to be *local* to or *owned* by the rank.\n", "* If two cells are connected by shared facet *and* are on different MPI ranks then the topological and geometrical information about the cell owned by the *other* rank is duplicated. This duplicated set of cells associated with the other rank are called the *ghost cells*." 
] }, { "cell_type": "code", "execution_count": 13, "id": "cellular-error", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Number of local cells: 1\n", "Rank 0: Number of global cells: 2\n", "Rank 0: Number of local vertices: 3\n", "Rank 0: Number of global vertices: 4\n", "Rank 0: Cell (dim = 2) to vertex (dim = 0) connectivity\n", "Rank 0: with 2 nodes\n", " 0: [1 0 2 ]\n", " 1: [1 3 2 ]\n", "\n", "Rank 0: Ghost cells (global numbering): [1]\n", "Rank 0: Ghost owner rank: [1]\n", "Rank 1: Number of local cells: 1\n", "Rank 1: Number of global cells: 2\n", "Rank 1: Number of local vertices: 1\n", "Rank 1: Number of global vertices: 4\n", "Rank 1: Cell (dim = 2) to vertex (dim = 0) connectivity\n", "Rank 1: with 2 nodes\n", " 0: [2 0 1 ]\n", " 1: [2 3 1 ]\n", "\n", "Rank 1: Ghost cells (global numbering): [0]\n", "Rank 1: Ghost owner rank: [0]\n" ] } ], "source": [ "!mpirun -n 2 python3 05-mpi-dolfinx-ghosts.py" ] }, { "cell_type": "markdown", "id": "august-disaster", "metadata": {}, "source": [ "### FunctionSpace\n", "\n", "We will now look at how a ghosted `Mesh` creates a ghosted `FunctionSpace`.\n", "\n", "Consider a continuous first-order Lagrange space on the mesh. The degrees of freedom of this space are associated with the vertices of the mesh." ] }, { "cell_type": "code", "execution_count": 14, "id": "cathedral-administrator", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "import dolfinx\n", "import dolfinx.io\n", "\n", "comm = MPI.COMM_WORLD\n", "\n", "def mpi_print(s):\n", " print(f\"Rank {comm.rank}: {s}\")\n", "\n", "mesh = dolfinx.UnitSquareMesh(comm, 1, 1, diagonal=\"right\", ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet)\n", "\n", "V = dolfinx.FunctionSpace(mesh, (\"CG\", 1))\n", "\n", "mpi_print(f\"Global size: {V.dofmap.index_map.size_global}\")\n", "mpi_print(f\"Local size: {V.dofmap.index_map.size_local}\")\n", "mpi_print(f\"Ghosts (global numbering): {V.dofmap.index_map.ghosts}\")\n" ] } ], "source": [ "!cat 06-mpi-dolfinx-function-space.py" ] }, { "cell_type": "code", "execution_count": 15, "id": "enclosed-compilation", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Global size: 4\n", "Rank 0: Local size: 4\n", "Rank 0: Ghosts (global numbering): []\n" ] } ], "source": [ "!mpirun -n 1 python3 06-mpi-dolfinx-function-space.py" ] }, { "cell_type": "code", "execution_count": 16, "id": "comparable-fruit", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Global size: 4\n", "Rank 0: Local size: 3\n", "Rank 0: Ghosts (global numbering): [3]\n", "Rank 1: Global size: 4\n", "Rank 1: Local size: 1\n", "Rank 1: Ghosts (global numbering): [2 0 1]\n" ] } ], "source": [ "!mpirun -n 2 python3 06-mpi-dolfinx-function-space.py" ] }, { "cell_type": "markdown", "id": "grave-garage", "metadata": {}, "source": [ "### Functions" ] }, { "cell_type": "markdown", "id": "hourly-upgrade", "metadata": {}, "source": [ "A `Function` is built from a `FunctionSpace`. It contains memory (an array) in which the expansion coefficients ($u_i$) of the finite element basis ($\\phi_i$) can be stored.\n", "\n", "$$u_h = \\sum_{i = 1}^4 \\phi_i u_i$$\n", "\n", "A `Function` built from a ghosted `FunctionSpace` has memory to store the expansion coefficients of the local degrees of freedom *and* the ghost degrees of freedom." 
] }, { "cell_type": "code", "execution_count": 17, "id": "literary-principle", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "import dolfinx\n", "import dolfinx.io\n", "\n", "comm = MPI.COMM_WORLD\n", "\n", "def mpi_print(s):\n", " print(f\"Rank {comm.rank}: {s}\")\n", "\n", "mesh = dolfinx.UnitSquareMesh(comm, 1, 1, diagonal=\"right\", ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet)\n", "\n", "V = dolfinx.FunctionSpace(mesh, (\"CG\", 1))\n", "\n", "u = dolfinx.Function(V)\n", "vector = u.vector\n", "\n", "mpi_print(f\"Local size of vector: {vector.getLocalSize()}\")\n", "\n", "# .localForm() allows us to access the local array with space for both owned and local degrees of freedom.\n", "with vector.localForm() as v_local:\n", " mpi_print(f\"Local + Ghost size of vector: {v_local.getLocalSize()}\")\n", " \n", "vector.ghostUpdate()\n" ] } ], "source": [ "!cat 07-mpi-dolfinx-function.py" ] }, { "cell_type": "code", "execution_count": 18, "id": "false-journalism", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Local size of vector: 4\n", "Rank 0: Local + Ghost size of vector: 4\n" ] } ], "source": [ "!mpirun -n 1 python3 07-mpi-dolfinx-function.py" ] }, { "cell_type": "code", "execution_count": 19, "id": "widespread-functionality", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Local size of vector: 3\n", "Rank 0: Local + Ghost size of vector: 4\n", "Rank 1: Local size of vector: 1\n", "Rank 1: Local + Ghost size of vector: 4\n" ] } ], "source": [ "!mpirun -n 2 python3 07-mpi-dolfinx-function.py" ] }, { "cell_type": "markdown", "id": "prescribed-audience", "metadata": {}, "source": [ "### Simple scattering\n", "\n", "Let's say we want to change the expansion coefficient $\\phi_0$ (local numbering) on both processes to have a value equal to the MPI rank + 1 of the owning process. For consistency we need this change to be reflected in two places:\n", "\n", "1. In the memory of the process that owns the degree of freedom.\n", "2. In the memory of the process (if any) that has the degree of freedom as a ghost.\n", "\n", "There are two ways to do this:\n", "\n", "1. Insert the values on both processes (i.e. four local set operations, some involving owned and some involving ghost dofs).\n", "2. Insert the values on the owning processes (i.e. two local set operations) and then scatter/communicate the values to the ghost dofs of the other process." ] }, { "cell_type": "code", "execution_count": 20, "id": "arctic-patio", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "from petsc4py import PETSc\n", "import dolfinx\n", "import dolfinx.io\n", "\n", "comm = MPI.COMM_WORLD\n", "\n", "def mpi_print(s):\n", " print(f\"Rank {comm.rank}: {s}\")\n", "\n", "mesh = dolfinx.UnitSquareMesh(comm, 1, 1, diagonal=\"right\", ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet)\n", "\n", "V = dolfinx.FunctionSpace(mesh, (\"CG\", 1))\n", "\n", "u = dolfinx.Function(V)\n", "vector = u.vector\n", "\n", "# Set the value locally. No communication is performed.\n", "u.vector.setValueLocal(0, comm.rank + 1)\n", "\n", "# Print the local and ghosted memory to screen. 
Notice that the memory on each process is inconsistent.\n", "mpi_print(\"Before communication\")\n", "with vector.localForm() as v_local:\n", " mpi_print(v_local.array)\n", " \n", "vector.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)\n", "\n", "mpi_print(\"After communication\")\n", "with vector.localForm() as v_local:\n", " mpi_print(v_local.array)" ] } ], "source": [ "!cat 08-mpi-dolfinx-simple-scatter.py" ] }, { "cell_type": "code", "execution_count": 21, "id": "acknowledged-detroit", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Before communication\n", "Rank 0: [1. 0. 0. 0.]\n", "Rank 0: After communication\n", "Rank 0: [1. 0. 0. 2.]\n", "Rank 1: Before communication\n", "Rank 1: [2. 0. 0. 0.]\n", "Rank 1: After communication\n", "Rank 1: [2. 0. 1. 0.]\n" ] } ], "source": [ "!mpirun -n 2 python3 08-mpi-dolfinx-simple-scatter.py" ] }, { "cell_type": "markdown", "id": "operating-review", "metadata": {}, "source": [ "### Assembling vectors\n", "\n", "Now we want to assemble a linear form $L(v)$ into a vector $b$ with\n", "\n", "$$L(v) = \\int_{\\Omega} v \\; \\mathrm{d}x$$\n", "\n", "When we call ``dolfinx.fem.assemble_vector(L)`` the following happens:\n", "\n", "1. Each process calculates the cell tensors $b_T$ for cells that it owns.\n", "2. It assembles (adds) the result into its local array.\n", "\n", "At this point no parallel communication has taken place! The vector is in an inconsistent state. It should not be used.\n", "\n", "First, we need to take the values in the ghost regions and accumulate them into the values on the owning processes:\n", "\n", "`b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)`\n", "\n", "It is important to note that the ghosted part of the vector is still in an inconsistent state even after this call. However, it can be safely used for e.g. matrix-vector products (i.e. 
solving).\n", "\n", "To update the ghost values with values from the owner.\n", "\n", "`b.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)`\n", "\n", "After this call all owned and ghosted values on all processes are in a consistent state.\n" ] }, { "cell_type": "code", "execution_count": 22, "id": "fancy-vampire", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "from mpi4py import MPI\n", "from petsc4py import PETSc\n", "import dolfinx\n", "import dolfinx.io\n", "import ufl\n", "import os\n", "\n", "comm = MPI.COMM_WORLD\n", "\n", "def mpi_print(s):\n", " print(f\"Rank {comm.rank}: {s}\")\n", "\n", "mesh = dolfinx.UnitSquareMesh(comm, 1, 1, diagonal=\"right\", ghost_mode=dolfinx.cpp.mesh.GhostMode.shared_facet)\n", "\n", "V = dolfinx.FunctionSpace(mesh, (\"CG\", 1))\n", "\n", "u = dolfinx.Function(V)\n", "v = ufl.TestFunction(V)\n", "\n", "L = ufl.inner(1.0, v)*ufl.dx\n", "\n", "b = dolfinx.fem.assemble_vector(L)\n", "\n", "mpi_print(\"Before communication\")\n", "with b.localForm() as b_local:\n", " mpi_print(b_local.array)\n", " \n", "print(\"\\n\")\n", "\n", "# This call takes the values from the ghost regions and accumulates (adds) them to the owning process.\n", "b.ghostUpdate(addv=PETSc.InsertMode.ADD, mode=PETSc.ScatterMode.REVERSE)\n", "\n", "mpi_print(\"After ADD/REVERSE update\")\n", "with b.localForm() as b_local:\n", " mpi_print(b_local.array)\n", " \n", "print(\"\\n\")\n", "\n", "# Important point: The ghosts are still inconsistent!\n", "# This call takes the values from the owning processes and updates the ghosts.\n", "b.ghostUpdate(addv=PETSc.InsertMode.INSERT, mode=PETSc.ScatterMode.FORWARD)\n", "\n", "mpi_print(\"After INSERT/FORWARD update\")\n", "with b.localForm() as b_local:\n", " mpi_print(b_local.array)" ] } ], "source": [ "!cat 09-mpi-dolfinx-assemble-vector.py" ] }, { "cell_type": "code", "execution_count": 23, "id": "weekly-merchant", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Before communication\n", "Rank 0: [0.16666667 0.33333333 0.33333333 0.16666667]\n", "\n", "\n", "Rank 0: After ADD/REVERSE update\n", "Rank 0: [0.16666667 0.33333333 0.33333333 0.16666667]\n", "\n", "\n", "Rank 0: After INSERT/FORWARD update\n", "Rank 0: [0.16666667 0.33333333 0.33333333 0.16666667]\n" ] } ], "source": [ "!mpirun -n 1 python3 09-mpi-dolfinx-assemble-vector.py" ] }, { "cell_type": "code", "execution_count": 24, "id": "quarterly-accordance", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Rank 0: Before communication\n", "Rank 0: [0.16666667 0.16666667 0.16666667 0. ]\n", "\n", "\n", "Rank 0: After ADD/REVERSE update\n", "Rank 0: [0.33333333 0.16666667 0.33333333 0. ]\n", "\n", "\n", "Rank 0: After INSERT/FORWARD update\n", "Rank 0: [0.33333333 0.16666667 0.33333333 0.16666667]\n", "Rank 1: Before communication\n", "Rank 1: [0.16666667 0.16666667 0.16666667 0. ]\n", "\n", "\n", "Rank 1: After ADD/REVERSE update\n", "Rank 1: [0.16666667 0.16666667 0.16666667 0. 
]\n", "\n", "\n", "Rank 1: After INSERT/FORWARD update\n", "Rank 1: [0.16666667 0.33333333 0.33333333 0.16666667]\n" ] } ], "source": [ "!mpirun -n 2 python3 09-mpi-dolfinx-assemble-vector.py" ] }, { "cell_type": "markdown", "id": "thousand-exchange", "metadata": {}, "source": [ "### Matrix-Vector products\n", "\n", "Explain basic aspects of a parallel matrix-vector product.\n", "\n", "Explain why having consistent ghost values on each rank is not necessary for correct matrix-vector product." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 5 }