MPICH Documentation
- Constants(3) Meaning of MPI's defined constants
- hydra_nameserver(1) Name server program used by Hydra to support MPI name publishing and lookup
- hydra_persist(1) Internal executable used by Hydra
- hydra_pmi_proxy(1) Internal executable used by Hydra
- MPI_Accumulate_c(3) Accumulate data into the target process using remote memory access
- MPI_Add_error_class(3) Add an MPI error class to the known classes
- MPI_Add_error_code(3) Add an MPI error code to an MPI error class
- MPI_Add_error_string(3) Associates an error string with an MPI error code or class
- MPI_Aint_add(3) Returns the sum of base and disp
- MPI_Aint_diff(3) Returns the difference between addr1 and addr2
- MPI_Allgather_c(3) Gathers data from all tasks and distributes the combined data to all tasks
- MPI_Allgather_init_c(3) Create a persistent request for allgather.
- MPI_Allgather_init(3) Create a persistent request for allgather.
- MPI_Allgatherv_c(3) Gathers data from all tasks and delivers the combined data to all tasks
- MPI_Allgatherv_init_c(3) Create a persistent request for allgatherv.
- MPI_Allgatherv_init(3) Create a persistent request for allgatherv.
- MPI_Allreduce_c(3) Combines values from all processes and distributes the result back to all processes
- MPI_Allreduce_init_c(3) Create a persistent request for allreduce
- MPI_Allreduce_init(3) Create a persistent request for allreduce
- MPI_Alltoall_c(3) Sends data from all to all processes
- MPI_Alltoall_init_c(3) Create a persistent request for alltoall.
- MPI_Alltoall_init(3) Create a persistent request for alltoall.
- MPI_Alltoallv_c(3) Sends data from all to all processes; each process may send a different amount of data and provide displacements for the input and output
- MPI_Alltoallv_init_c(3) Create a persistent request for alltoallv.
- MPI_Alltoallv_init(3) Create a persistent request for alltoallv.
- MPI_Alltoallw_c(3) Generalized all-to-all communication allowing different datatypes, counts, and displacements for each partner
- MPI_Alltoallw_init_c(3) Create a persistent request for alltoallw.
- MPI_Alltoallw_init(3) Create a persistent request for alltoallw.
- MPI_Alltoallw(3) Generalized all-to-all communication allowing different datatypes, counts, and displacements for each partner
- MPI_Barrier_init(3) Creates a persistent request for barrier
- MPI_Bcast_c(3) Broadcasts a message from the process with rank "root" to all other processes of the communicator
- MPI_Bcast_init_c(3) Creates a persistent request for broadcast
- MPI_Bcast_init(3) Creates a persistent request for broadcast
- MPI_Bsend_c(3) Basic send with user-provided buffering
- MPI_Bsend_init_c(3) Builds a handle for a buffered send
- MPI_Buffer_attach_c(3) Attaches a user-provided buffer for sending
- MPI_Buffer_detach_c(3) Removes an existing buffer (for use in MPI_Bsend etc)
- MPI_Comm_call_errhandler(3) Call the error handler installed on a communicator
- MPI_Comm_create_from_group(3) Create communicator from a group
- MPI_Comm_create_group(3) Creates a new communicator
- MPI_Comm_dup_with_info(3) Duplicates an existing communicator with all its cached information
- MPI_Comm_get_info(3) Returns a new info object containing the hints
- MPI_Comm_idup_with_info(3) Nonblocking communicator duplication
- MPI_Comm_idup(3) Nonblocking communicator duplication
- MPI_Comm_set_errhandler(3) Set the error handler for a communicator
- MPI_Comm_set_info(3) Set new values for the hints of the communicator associated with comm
- MPI_Comm_set_name(3) Sets the print name for a communicator
- MPI_Comm_split_type(3) Creates new communicators based on split types and keys
- MPI_Compare_and_swap(3) Perform one-sided atomic compare-and-swap.
- MPI_Dist_graph_create_adjacent(3) returns a handle to a new communicator to which the distributed graph topology information is attached
- MPI_Dist_graph_create(3) Returns a handle to a new communicator to which the distributed graph topology information is attached
- MPI_Dist_graph_neighbors_count(3) Provides adjacency information for a distributed graph topology
- MPI_Dist_graph_neighbors(3) Provides adjacency information for a distributed graph topology
- MPI_Exscan_c(3) Computes the exclusive scan (partial reductions) of data on a collection of processes
- MPI_Exscan_init_c(3) Creates a persistent request for exscan
- MPI_Exscan_init(3) Creates a persistent request for exscan
- MPI_Exscan(3) Computes the exclusive scan (partial reductions) of data on a collection of processes
- MPI_Fetch_and_op(3) Perform one-sided read-modify-write.
- MPI_File_call_errhandler(3) Call the error handler installed on a file
- MPI_File_create_errhandler(3) Create a file error handler
- MPI_File_get_type_extent_c(3) Returns the extent of datatype in the file
- MPI_File_iread_all_c(3) Nonblocking collective read using individual file pointer
- MPI_File_iread_all(3) Nonblocking collective read using individual file pointer
- MPI_File_iread_at_all_c(3) Nonblocking collective read using explicit offset
- MPI_File_iread_at_all(3) Nonblocking collective read using explicit offset
- MPI_File_iread_at_c(3) Nonblocking read using explicit offset
- MPI_File_iread_c(3) Nonblocking read using individual file pointer
- MPI_File_iread_shared_c(3) Nonblocking read using shared file pointer
- MPI_File_iwrite_all_c(3) Nonblocking collective write using individual file pointer
- MPI_File_iwrite_all(3) Nonblocking collective write using individual file pointer
- MPI_File_iwrite_at_all_c(3) Nonblocking collective write using explicit offset
- MPI_File_iwrite_at_all(3) Nonblocking collective write using explicit offset
- MPI_File_iwrite_at_c(3) Nonblocking write using explicit offset
- MPI_File_iwrite_c(3) Nonblocking write using individual file pointer
- MPI_File_iwrite_shared_c(3) Nonblocking write using shared file pointer
- MPI_File_read_all_begin_c(3) Begin a split collective read using individual file pointer
- MPI_File_read_all_c(3) Collective read using individual file pointer
- MPI_File_read_at_all_begin_c(3) Begin a split collective read using explicit offset
- MPI_File_read_at_all_c(3) Collective read using explicit offset
- MPI_File_read_at_c(3) Read using explicit offset
- MPI_File_read_c(3) Read using individual file pointer
- MPI_File_read_ordered_begin_c(3) Begin a split collective read using shared file pointer
- MPI_File_read_ordered_c(3) Collective read using shared file pointer
- MPI_File_read_shared_c(3) Read using shared file pointer
- MPI_File_write_all_begin_c(3) Begin a split collective write using individual file pointer
- MPI_File_write_all_c(3) Collective write using individual file pointer
- MPI_File_write_at_all_begin_c(3) Begin a split collective write using explicit offset
- MPI_File_write_at_all_c(3) Collective write using explicit offset
- MPI_File_write_at_c(3) Write using explicit offset
- MPI_File_write_c(3) Write using individual file pointer
- MPI_File_write_ordered_begin_c(3) Begin a split collective write using shared file pointer
- MPI_File_write_ordered_c(3) Collective write using shared file pointer
- MPI_File_write_shared_c(3) Write using shared file pointer
- MPI_Gather_c(3) Gathers together values from a group of processes
- MPI_Gather_init_c(3) Create a persistent request for gather.
- MPI_Gather_init(3) Create a persistent request for gather.
- MPI_Gatherv_c(3) Gathers into specified locations from all processes in a group
- MPI_Gatherv_init_c(3) Create a persistent request for gatherv.
- MPI_Gatherv_init(3) Create a persistent request for gatherv.
- MPI_Get_accumulate_c(3) Perform an atomic, one-sided read-and-accumulate operation
- MPI_Get_accumulate(3) Perform an atomic, one-sided read-and-accumulate operation
- MPI_Get_c(3) Get data from a memory window on a remote process
- MPI_Get_count_c(3) Gets the number of "top level" elements
- MPI_Get_elements_c(3) Returns the number of basic elements
- MPI_Get_elements_x(3) Returns the number of basic elements
- MPI_Get_library_version(3) Return the version number of the MPI library
- MPI_Grequest_complete(3) Notify MPI that a user-defined request is complete
- MPI_Grequest_start(3) Create and return a user-defined request
- MPI_Group_from_session_pset(3) Get a group from a session process set
- MPI_Iallgather_c(3) Gathers data from all tasks and distributes the combined data to all tasks in a nonblocking way
- MPI_Iallgather(3) Gathers data from all tasks and distributes the combined data to all tasks in a nonblocking way
- MPI_Iallgatherv_c(3) Gathers data from all tasks and delivers the combined data to all tasks in a nonblocking way
- MPI_Iallgatherv(3) Gathers data from all tasks and delivers the combined data to all tasks in a nonblocking way
- MPI_Iallreduce_c(3) Combines values from all processes and distributes the result back to all processes in a nonblocking way
- MPI_Iallreduce(3) Combines values from all processes and distributes the result back to all processes in a nonblocking way
- MPI_Ialltoall_c(3) Sends data from all to all processes in a nonblocking way
- MPI_Ialltoall(3) Sends data from all to all processes in a nonblocking way
- MPI_Ialltoallv_c(3) Sends data from all to all processes in a nonblocking way; each process may send a different amount of data and provide displacements for the input and output
- MPI_Ialltoallv(3) Sends data from all to all processes in a nonblocking way; each process may send a different amount of data and provide displacements for the input and output
- MPI_Ialltoallw_c(3) Nonblocking generalized all-to-all communication allowing different datatypes, counts, and displacements for each partner
- MPI_Ialltoallw(3) Nonblocking generalized all-to-all communication allowing different datatypes, counts, and displacements for each partner
- MPI_Ibarrier(3) Notifies the process that it has reached the barrier and returns immediately
- MPI_Ibcast_c(3) Broadcasts a message from the process with rank "root" to all other processes of the communicator in a nonblocking way
- MPI_Ibcast(3) Broadcasts a message from the process with rank "root" to all other processes of the communicator in a nonblocking way
- MPI_Ibsend_c(3) Starts a nonblocking buffered send
- MPI_Iexscan_c(3) Computes the exclusive scan (partial reductions) of data on a collection of processes in a nonblocking way
- MPI_Iexscan(3) Computes the exclusive scan (partial reductions) of data on a collection of processes in a nonblocking way
- MPI_Igather_c(3) Gathers together values from a group of processes in a nonblocking way
- MPI_Igather(3) Gathers together values from a group of processes in a nonblocking way
- MPI_Igatherv_c(3) Gathers into specified locations from all processes in a group in a nonblocking way
- MPI_Igatherv(3) Gathers into specified locations from all processes in a group in a nonblocking way
- MPI_Improbe(3) Nonblocking matched probe.
- MPI_Imrecv_c(3) Nonblocking receive of message matched by MPI_Mprobe or MPI_Improbe.
- MPI_Imrecv(3) Nonblocking receive of message matched by MPI_Mprobe or MPI_Improbe.
- MPI_Ineighbor_allgather_c(3) Nonblocking version of MPI_Neighbor_allgather.
- MPI_Ineighbor_allgather(3) Nonblocking version of MPI_Neighbor_allgather.
- MPI_Ineighbor_allgatherv_c(3) Nonblocking version of MPI_Neighbor_allgatherv.
- MPI_Ineighbor_allgatherv(3) Nonblocking version of MPI_Neighbor_allgatherv.
- MPI_Ineighbor_alltoall_c(3) Nonblocking version of MPI_Neighbor_alltoall.
- MPI_Ineighbor_alltoall(3) Nonblocking version of MPI_Neighbor_alltoall.
- MPI_Ineighbor_alltoallv_c(3) Nonblocking version of MPI_Neighbor_alltoallv.
- MPI_Ineighbor_alltoallv(3) Nonblocking version of MPI_Neighbor_alltoallv.
- MPI_Ineighbor_alltoallw_c(3) Nonblocking version of MPI_Neighbor_alltoallw.
- MPI_Ineighbor_alltoallw(3) Nonblocking version of MPI_Neighbor_alltoallw.
- MPI_Info_create_env(3) Creates an info object containing information about the application
- MPI_Info_get_string(3) Retrieves the value associated with a key
- MPI_Intercomm_create_from_groups(3) Create an intercommunicator from local and remote groups
- MPI_Irecv_c(3) Begins a nonblocking receive
- MPI_Ireduce_c(3) Reduces values on all processes to a single value in a nonblocking way
- MPI_Ireduce_scatter_block_c(3) Combines values and scatters the results in a nonblocking way
- MPI_Ireduce_scatter_block(3) Combines values and scatters the results in a nonblocking way
- MPI_Ireduce_scatter_c(3) Combines values and scatters the results in a nonblocking way
- MPI_Ireduce_scatter(3) Combines values and scatters the results in a nonblocking way
- MPI_Ireduce(3) Reduces values on all processes to a single value in a nonblocking way
- MPI_Irsend_c(3) Starts a nonblocking ready send
- MPI_Iscan_c(3) Computes the scan (partial reductions) of data on a collection of processes in a nonblocking way
- MPI_Iscan(3) Computes the scan (partial reductions) of data on a collection of processes in a nonblocking way
- MPI_Iscatter_c(3) Sends data from one process to all other processes in a communicator in a nonblocking way
- MPI_Iscatter(3) Sends data from one process to all other processes in a communicator in a nonblocking way
- MPI_Iscatterv_c(3) Scatters a buffer in parts to all processes in a communicator in a nonblocking way
- MPI_Iscatterv(3) Scatters a buffer in parts to all processes in a communicator in a nonblocking way
- MPI_Isend_c(3) Begins a nonblocking send
- MPI_Isendrecv_c(3) Starts a nonblocking send and receive
- MPI_Isendrecv_replace_c(3) Starts a nonblocking send and receive with a single buffer
- MPI_Isendrecv_replace(3) Starts a nonblocking send and receive with a single buffer
- MPI_Isendrecv(3) Starts a nonblocking send and receive
- MPI_Issend_c(3) Starts a nonblocking synchronous send
- MPI_Mprobe(3) Blocking matched probe.
- MPI_Mrecv_c(3) Blocking receive of message matched by MPI_Mprobe or MPI_Improbe.
- MPI_Mrecv(3) Blocking receive of message matched by MPI_Mprobe or MPI_Improbe.
- MPI_Neighbor_allgather_c(3) Gathers data from all neighboring processes and distributes the combined data to all neighboring processes
- MPI_Neighbor_allgather_init_c(3) Create a persistent request for neighbor_allgather.
- MPI_Neighbor_allgather_init(3) Create a persistent request for neighbor_allgather.
- MPI_Neighbor_allgather(3) Gathers data from all neighboring processes and distributes the combined data to all neighboring processes
- MPI_Neighbor_allgatherv_c(3) The vector variant of MPI_Neighbor_allgather.
- MPI_Neighbor_allgatherv_init_c(3) Create a persistent request for neighbor_allgatherv.
- MPI_Neighbor_allgatherv_init(3) Create a persistent request for neighbor_allgatherv.
- MPI_Neighbor_allgatherv(3) The vector variant of MPI_Neighbor_allgather.
- MPI_Neighbor_alltoall_c(3) Sends and receives data from all neighboring processes
- MPI_Neighbor_alltoall_init_c(3) Create a persistent request for neighbor_alltoall.
- MPI_Neighbor_alltoall_init(3) Create a persistent request for neighbor_alltoall.
- MPI_Neighbor_alltoall(3) Sends and receives data from all neighboring processes
- MPI_Neighbor_alltoallv_c(3) The vector variant of MPI_Neighbor_alltoall allows sending/receiving different numbers of elements to and from each neighbor
- MPI_Neighbor_alltoallv_init_c(3) Create a persistent request for neighbor_alltoallv.
- MPI_Neighbor_alltoallv_init(3) Create a persistent request for neighbor_alltoallv.
- MPI_Neighbor_alltoallv(3) The vector variant of MPI_Neighbor_alltoall allows sending/receiving different numbers of elements to and from each neighbor
- MPI_Neighbor_alltoallw_c(3) Like MPI_Neighbor_alltoallv but it allows one to send and receive with different types to and from each neighbor.
- MPI_Neighbor_alltoallw_init_c(3) Create a persistent request for neighbor_alltoallw.
- MPI_Neighbor_alltoallw_init(3) Create a persistent request for neighbor_alltoallw.
- MPI_Neighbor_alltoallw(3) Like MPI_Neighbor_alltoallv but it allows one to send and receive with different types to and from each neighbor.
- MPI_Op_commutative(3) Queries an MPI reduction operation for its commutativity
- MPI_Op_create_c(3) Creates a user-defined combination function handle
- MPI_Pack_c(3) Packs a datatype into contiguous memory
- MPI_Pack_external_c(3) Packs a datatype into contiguous memory, using the external32 format
- MPI_Pack_external_size_c(3) Returns the upper bound on the amount of space needed to pack a message using MPI_Pack_external.
- MPI_Pack_external_size(3) Returns the upper bound on the amount of space needed to pack a message using MPI_Pack_external.
- MPI_Pack_external(3) Packs a datatype into contiguous memory, using the external32 format
- MPI_Pack_size_c(3) Returns the upper bound on the amount of space needed to pack a message
- MPI_Parrived(3) Test partial completion of partitioned receive operations
- MPI_Pready_list(3) Indicates that a list of portions of the send buffer in a partitioned send operation are ready to be transferred
- MPI_Pready_range(3) Indicates that a given range of the send buffer in a partitioned send operation is ready to be transferred
- MPI_Pready(3) Indicates that a given portion of the send buffer in a partitioned send operation is ready to be transferred
- MPI_Precv_init(3) Creates a partitioned communication receive request
- MPI_Psend_init(3) Creates a partitioned communication send request
- MPI_Put_c(3) Put data into a memory window on a remote process
- MPI_Raccumulate_c(3) Accumulate data into the target process using remote memory access
- MPI_Raccumulate(3) Accumulate data into the target process using remote memory access
- MPI_Recv_c(3) Blocking receive for a message
- MPI_Recv_init_c(3) Create a persistent request for a receive
- MPI_Reduce_c(3) Reduces values on all processes to a single value
- MPI_Reduce_init_c(3) Create a persistent request for reduce
- MPI_Reduce_init(3) Create a persistent request for reduce
- MPI_Reduce_local_c(3) Applies a reduction operator to local arguments.
- MPI_Reduce_local(3) Applies a reduction operator to local arguments.
- MPI_Reduce_scatter_block_c(3) Combines values and scatters the results
- MPI_Reduce_scatter_block_init_c(3) Create a persistent request for reduce_scatter_block.
- MPI_Reduce_scatter_block_init(3) Create a persistent request for reduce_scatter_block.
- MPI_Reduce_scatter_block(3) Combines values and scatters the results
- MPI_Reduce_scatter_c(3) Combines values and scatters the results
- MPI_Reduce_scatter_init_c(3) Create a persistent request for reduce_scatter.
- MPI_Reduce_scatter_init(3) Create a persistent request for reduce_scatter.
- MPI_Register_datarep_c(3) Register functions for user-defined data representations
- MPI_Register_datarep(3) Register functions for user-defined data representations
- MPI_Request_get_status(3) Nondestructive test for the completion of a request
- MPI_Rget_accumulate_c(3) Perform an atomic, one-sided read-and-accumulate
- MPI_Rget_accumulate(3) Perform an atomic, one-sided read-and-accumulate
- MPI_Rget_c(3) Get data from a memory window on a remote process
- MPI_Rget(3) Get data from a memory window on a remote process
- MPI_Rput_c(3) Put data into a memory window on a remote process and return a request
- MPI_Rput(3) Put data into a memory window on a remote process and return a request
- MPI_Rsend_c(3) Blocking ready send
- MPI_Rsend_init_c(3) Creates a persistent request for a ready send
- MPI_Scan_c(3) Computes the scan (partial reductions) of data on a collection of processes
- MPI_Scan_init_c(3) Create a persistent request for scan.
- MPI_Scan_init(3) Create a persistent request for scan.
- MPI_Scatter_c(3) Sends data from one process to all other processes in a communicator
- MPI_Scatter_init_c(3) Create a persistent request for scatter.
- MPI_Scatter_init(3) Create a persistent request for scatter.
- MPI_Scatterv_c(3) Scatters a buffer in parts to all processes in a communicator
- MPI_Scatterv_init_c(3) Create a persistent request for scatterv.
- MPI_Scatterv_init(3) Create a persistent request for scatterv.
- MPI_Send_c(3) Performs a blocking send
- MPI_Send_init_c(3) Create a persistent request for a standard send
- MPI_Sendrecv_c(3) Sends and receives a message
- MPI_Sendrecv_replace_c(3) Sends and receives using a single buffer
- MPI_Session_call_errhandler(3) Call the error handler installed on an MPI session
- MPI_Session_create_errhandler(3) Create an error handler for use with MPI session
- MPI_Session_finalize(3) Finalize an MPI session
- MPI_Session_get_errhandler(3) Get the error handler for the MPI session
- MPI_Session_get_info(3) Get the info hints associated with the session
- MPI_Session_get_nth_pset(3) Get the nth process set
- MPI_Session_get_num_psets(3) Get number of available process sets
- MPI_Session_get_pset_info(3) Get the info associated with the process set
- MPI_Session_init(3) Initialize an MPI session
- MPI_Session_set_errhandler(3) Set MPI session error handler
- MPI_Ssend_c(3) Blocking synchronous send
- MPI_Ssend_init_c(3) Creates a persistent request for a synchronous send
- MPI_Status_set_cancelled(3) Sets the cancelled state associated with a request
- MPI_Status_set_elements_x(3) Set the number of elements in a status
- MPI_Status_set_elements(3) Set the number of elements in a status
- MPI_T_category_changed(3) Get the timestamp indicating the last change to the categories
- MPI_T_category_get_categories(3) Get sub-categories in a category
- MPI_T_category_get_cvars(3) Get control variables in a category
- MPI_T_category_get_events(3) Query which event types are contained in a particular category.
- MPI_T_category_get_index(3) Get the index of a category
- MPI_T_category_get_info(3) Get the information about a category
- MPI_T_category_get_num_events(3) Returns the number of event types contained in the queried category.
- MPI_T_category_get_num(3) Get the number of categories
- MPI_T_category_get_pvars(3) Get performance variables in a category
- MPI_T_cvar_get_index(3) Get the index of a control variable
- MPI_T_cvar_get_info(3) Get the information about a control variable
- MPI_T_cvar_get_num(3) Get the number of control variables
- MPI_T_cvar_handle_alloc(3) Allocate a handle for a control variable
- MPI_T_cvar_handle_free(3) Free an existing handle for a control variable
- MPI_T_cvar_read(3) Read the value of a control variable
- MPI_T_cvar_write(3) Write a control variable
- MPI_T_enum_get_info(3) Get the information about an enumeration
- MPI_T_enum_get_item(3) Get the information about an item in an enumeration
- MPI_T_event_callback_get_info(3) Returns a new info object containing the hints of the callback function registered for the callback safety level specified by cb_safety of the event-registration handle associated with event_registration
- MPI_T_event_callback_set_info(3) Updates the hints of the callback function registered for the callback safety level specified by cb_safety of the event-registration handle associated with event_registration
- MPI_T_event_copy(3) Copy event data as a whole into a user-specified buffer
- MPI_T_event_get_index(3) Returns the index of an event type identified by a known event type name
- MPI_T_event_get_info(3) Returns additional information about a specific event type
- MPI_T_event_get_num(3) Query the number of event types
- MPI_T_event_get_source(3) Returns the index of the source of the event instance
- MPI_T_event_get_timestamp(3) Returns the timestamp of when the event was initially observed by the implementation
- MPI_T_event_handle_alloc(3) Creates a registration handle for the event type identified by event_index
- MPI_T_event_handle_free(3) Initiates deallocation of the event-registration handle specified by event_registration
- MPI_T_event_handle_get_info(3) Returns a new info object containing the hints of the event-registration handle associated with event_registration
- MPI_T_event_handle_set_info(3) Updates the hints of the event-registration handle associated with event_registration
- MPI_T_event_read(3) Copy one element of event data to a user-specified buffer
- MPI_T_event_register_callback(3) Associates a user-defined function with an allocated event-registration handle
- MPI_T_event_set_dropped_handler(3) Registers a function to be called when event information is dropped for the registration handle specified in event_registration
- MPI_T_finalize(3) Finalize the MPI tool information interface
- MPI_T_init_thread(3) Initialize the MPI_T execution environment
- MPI_T_pvar_get_index(3) Get the index of a performance variable
- MPI_T_pvar_get_info(3) Get the information about a performance variable
- MPI_T_pvar_get_num(3) Get the number of performance variables
- MPI_T_pvar_handle_alloc(3) Allocate a handle for a performance variable
- MPI_T_pvar_handle_free(3) Free an existing handle for a performance variable
- MPI_T_pvar_read(3) Read the value of a performance variable
- MPI_T_pvar_readreset(3) Read the value of a performance variable and then reset it
- MPI_T_pvar_reset(3) Reset a performance variable
- MPI_T_pvar_session_create(3) Create a new session for accessing performance variables
- MPI_T_pvar_session_free(3) Free an existing performance variable session
- MPI_T_pvar_start(3) Start a performance variable
- MPI_T_pvar_stop(3) Stop a performance variable
- MPI_T_pvar_write(3) Write a performance variable
- MPI_T_source_get_info(3) Returns additional information on the source identified by the source_index argument
- MPI_T_source_get_num(3) Query the number of event sources
- MPI_T_source_get_timestamp(3) Returns a current timestamp from the source identified by the source_index argument
- MPI_Type_contiguous_c(3) Creates a contiguous datatype
- MPI_Type_create_darray_c(3) Create a datatype representing a distributed array
- MPI_Type_create_hindexed_block_c(3) Create an hindexed datatype with constant-sized blocks
- MPI_Type_create_hindexed_block(3) Create an hindexed datatype with constant-sized blocks
- MPI_Type_create_hindexed_c(3) Create a datatype for an indexed datatype with displacements in bytes
- MPI_Type_create_hvector_c(3) Create a datatype with a constant stride given in bytes
- MPI_Type_create_indexed_block_c(3) Create an indexed datatype with constant-sized blocks
- MPI_Type_create_indexed_block(3) Create an indexed datatype with constant-sized blocks
- MPI_Type_create_resized_c(3) Create a datatype with a new lower bound and extent from an existing datatype
- MPI_Type_create_resized(3) Create a datatype with a new lower bound and extent from an existing datatype
- MPI_Type_create_struct_c(3) Create an MPI datatype from a general set of datatypes, displacements, and block sizes
- MPI_Type_create_subarray_c(3) Create a datatype for a subarray of a regular, multidimensional array
- MPI_Type_get_contents_c(3) get type contents
- MPI_Type_get_envelope_c(3) get type envelope
- MPI_Type_get_extent_c(3) Get the lower bound and extent for a datatype
- MPI_Type_get_extent_x(3) Get the lower bound and extent as MPI_Count values for a datatype
- MPI_Type_get_true_extent_c(3) Get the true lower bound and extent for a datatype
- MPI_Type_get_true_extent_x(3) Get the true lower bound and extent as MPI_Count values for a datatype
- MPI_Type_indexed_c(3) Creates an indexed datatype
- MPI_Type_match_size(3) Find an MPI datatype matching a specified size
- MPI_Type_size_c(3) Return the number of bytes occupied by entries in the datatype
- MPI_Type_size_x(3) Return the number of bytes occupied by entries in the datatype
- MPI_Type_vector_c(3) Creates a vector (strided) datatype
- MPI_Unpack_c(3) Unpack a buffer according to a datatype into contiguous memory
- MPI_Unpack_external_c(3) Unpack a buffer (packed with MPI_Pack_external) according to a datatype into contiguous memory
- MPI_Unpack_external(3) Unpack a buffer (packed with MPI_Pack_external) according to a datatype into contiguous memory
- MPI_Unpublish_name(3) Unpublish a service name published with MPI_Publish_name
- MPI_Win_allocate_c(3) Create and allocate an MPI Window object for one-sided communication
- MPI_Win_allocate_shared_c(3) Create an MPI Window object for one-sided communication and shared memory access, and allocate memory at each process
- MPI_Win_allocate_shared(3) Create an MPI Window object for one-sided communication and shared memory access, and allocate memory at each process
- MPI_Win_allocate(3) Create and allocate an MPI Window object for one-sided communication
- MPI_Win_attach(3) Attach memory to a dynamic window
- MPI_Win_create_c(3) Create an MPI Window object for one-sided communication
- MPI_Win_create_dynamic(3) Create an MPI Window object for one-sided communication
- MPI_Win_detach(3) Detach memory from a dynamic window
- MPI_Win_flush_all(3) Complete all outstanding RMA operations at all targets
- MPI_Win_flush_local_all(3) Complete locally all outstanding RMA operations at all targets
- MPI_Win_flush_local(3) Complete locally all outstanding RMA operations at the given target
- MPI_Win_flush(3) Complete all outstanding RMA operations at the given target
- MPI_Win_get_info(3) Returns a new info object containing the hints of the window
- MPI_Win_lock_all(3) Begin an RMA access epoch at all processes on the given window
- MPI_Win_set_info(3) Set new values for the hints of the window associated with win
- MPI_Win_shared_query_c(3) Query the size and base pointer for a patch of a shared memory window
- MPI_Win_shared_query(3) Query the size and base pointer for a patch of a shared memory window
- MPI_Win_sync(3) Synchronize public and private copies of the given window
- MPI_Win_unlock_all(3) Completes an RMA access epoch at all processes on the given window
- mpicc(1) Compiles and links MPI programs written in C
- mpicxx(1) Compiles and links MPI programs written in C++
- mpiexec(1) Run an MPI program
- mpifort(1) Compiles and links MPI programs written in Fortran 90
- MPIX_Comm_agree(3) Performs agreement operation on comm
- MPIX_Comm_failure_ack(3) Acknowledge the current group of failed processes
- MPIX_Comm_failure_get_acked(3) Get the group of acknowledged failures.
- MPIX_Comm_revoke(3) Prevent a communicator from being used in the future
- MPIX_Comm_shrink(3) Creates a new communicator from an existing communicator while excluding failed processes
- MPIX_Delete_error_class(3) Delete an MPI error class from the known classes
- MPIX_Delete_error_code(3) Delete an MPI error code
- MPIX_Delete_error_string(3) Delete the error string associated with an MPI error code or class
- MPIX_GPU_query_support(3) Returns whether the specific type of GPU is supported
- MPIX_Grequest_class_allocate(3) Create and return a user-defined extended request based on a generalized request class
- MPIX_Grequest_class_create(3) Create a generalized request class
- MPIX_Grequest_start(3) Create and return a user-defined extended request
- MPIX_Query_cuda_support(3) Returns whether CUDA is supported
- MPIX_Query_hip_support(3) Returns whether HIP (AMD GPU) is supported
- MPIX_Query_ze_support(3) Returns whether ZE (Intel GPU) is supported