This PR changes the type of four variables in the auxiliary matrix data structure to avoid multiplying integers by floating-point numbers in `hypre_IJMatrixSetAddValuesParCSRDevice`.
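A minimal illustration of the mixed-type arithmetic this change avoids (the variable names are made up for the sketch and are not the actual auxiliary-matrix fields):

```cpp
#include "HYPRE_utilities.h"   /* HYPRE_Int, HYPRE_Real */

/* Hypothetical illustration, not the actual hypre fields: a mixed
 * HYPRE_Int * HYPRE_Real product forces an integer-to-floating-point
 * conversion, plus a truncating cast when the result is stored back as an
 * integer. Keeping the operands in one type avoids both.                  */
void grow_capacity_sketch(void)
{
   HYPRE_Int  capacity    = 1024;
   HYPRE_Real grow_factor = 1.5;

   /* mixed types: capacity is converted to HYPRE_Real, then truncated back */
   HYPRE_Int  capacity_mixed = (HYPRE_Int) (capacity * grow_factor);

   /* matched types: the product stays in floating point throughout */
   HYPRE_Real capacity_real  = 1024.0 * grow_factor;

   (void) capacity_mixed; (void) capacity_real;
}
```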
Fixes for oneMKL sparse matmat and a port of our custom SpMV and SpGEMM routines to SYCL. Note that this also involves significant updates to the basic handling of kernel launches in SYCL, due to the need to support multi-dimensional kernels and the use of local shared memory.
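For context, a generic SYCL 2020 sketch (not hypre's actual launch helpers) of the two features the new launch handling has to cover: multi-dimensional nd_range launches and local (work-group shared) memory. The kernel and data layout below are purely illustrative.

```cpp
#include <sycl/sycl.hpp>

/* Generic SYCL 2020 sketch, not hypre's launch code: a 2-D nd_range launch
 * that also uses local (work-group shared) memory via a local_accessor.
 * `data` is assumed to be a USM pointer of size nx * ny.                  */
void launch_sketch(sycl::queue &q, float *data, size_t nx, size_t ny)
{
   const sycl::range<2> local_size(16, 16);
   const sycl::range<2> global_size(((nx + 15) / 16) * 16,
                                    ((ny + 15) / 16) * 16);

   q.submit([&](sycl::handler &cgh)
   {
      /* per-work-group scratch space, analogous to CUDA __shared__ memory */
      sycl::local_accessor<float, 2> tile(sycl::range<2>(16, 16), cgh);

      cgh.parallel_for(sycl::nd_range<2>(global_size, local_size),
                       [=](sycl::nd_item<2> item)
      {
         const size_t i  = item.get_global_id(0);
         const size_t j  = item.get_global_id(1);
         const size_t li = item.get_local_id(0);
         const size_t lj = item.get_local_id(1);

         tile[li][lj] = (i < nx && j < ny) ? data[i * ny + j] : 0.0f;

         /* synchronize the work-group before reading the shared tile */
         sycl::group_barrier(item.get_group());

         if (i < nx && j < ny) { data[i * ny + j] = tile[li][lj]; }
      });
   }).wait();
}
```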
This PR adds hypre_SeqVectorResize and hypre_ParVectorResize for resizing sequential and parallel vectors, respectively. This is useful for block-Krylov solvers/eigensolvers using BoomerAMG and multi-component vectors.
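A usage sketch for the sequential routine; the argument is assumed here to be the new size, but depending on the actual interface it may instead be the number of vector components, so consult seq_mv.h for the real prototype.

```cpp
#include "seq_mv.h"   /* hypre_Vector and the hypre_SeqVector* routines */

/* Sketch only: the signature of hypre_SeqVectorResize is assumed, not taken
 * from the PR; the surrounding create/initialize/destroy calls are standard. */
void resize_sketch(void)
{
   hypre_Vector *x = hypre_SeqVectorCreate(100);
   hypre_SeqVectorInitialize(x);

   hypre_SeqVectorResize(x, 200);   /* assumed signature */

   hypre_SeqVectorDestroy(x);
}
```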
* Updated gcc compiler flags for the strict-checking build option to emit floating-point conversion warnings
* Several minor edits to clean up floating-point conversion warnings and fix minor bugs.
* Updated saved files to reflect changes.
This PR adds hypre_ParCSRMatrixDiagScale for computing left and right parallel matrix scaling. The function also works when one of the scaling factors, which are stored as vectors, is not present. Regression tests have been added for this new function.
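The operation being computed is A ← diag(ld) · A · diag(rd). A serial CSR sketch of the semantics (not the hypre implementation), where a missing factor is treated as the identity:

```cpp
#include "HYPRE_utilities.h"   /* HYPRE_Int, HYPRE_Complex */

/* Serial CSR sketch of left/right diagonal scaling, A <- diag(ld) * A * diag(rd).
 * Illustrative only: a NULL ld or rd is treated as the identity, matching the
 * behavior described above.                                                  */
void diag_scale_sketch(HYPRE_Int nrows, const HYPRE_Int *row_ptr,
                       const HYPRE_Int *col_idx, HYPRE_Complex *values,
                       const HYPRE_Complex *ld, const HYPRE_Complex *rd)
{
   for (HYPRE_Int i = 0; i < nrows; i++)
   {
      for (HYPRE_Int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
      {
         HYPRE_Complex s = 1.0;
         if (ld) { s *= ld[i]; }            /* left scaling by row     */
         if (rd) { s *= rd[col_idx[k]]; }   /* right scaling by column */
         values[k] *= s;
      }
   }
}
```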
Introduce new AddFEMBoxValues() routines to improve system setup time when using the SStruct finite element interface. This initial implementation can produce significant speedups, but there is room for future optimizations.
Co-authored-by: Victor A. P. Magri <paludettomag1@llnl.gov>
This PR adds the function hypre_IntArrayCount for counting the number of occurrences of a value in a hypre_IntArray. Also, it moves device methods to a new file int_array_device.c.
Co-authored-by: Wayne Mitchell <mitchell82@llnl.gov>
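A host-side sketch of what hypre_IntArrayCount computes, assuming the usual hypre_IntArrayData/hypre_IntArraySize accessors; the actual routine also dispatches to the device implementation in int_array_device.c.

```cpp
#include "_hypre_utilities.h"   /* hypre_IntArray */

/* Illustrative loop only, not the actual implementation: count the number of
 * entries of the array equal to a given value.                              */
static HYPRE_Int count_sketch(hypre_IntArray *array, HYPRE_Int value)
{
   HYPRE_Int  count = 0;
   HYPRE_Int *data  = hypre_IntArrayData(array);

   for (HYPRE_Int i = 0; i < hypre_IntArraySize(array); i++)
   {
      count += (data[i] == value) ? 1 : 0;
   }

   return count;
}
```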
This PR cleans up the code to address -Wstrict-prototypes warnings. The flag was also added to the debug build of machine-tux.
Co-authored-by: Pierre Jolivet <pierre@joliv.et>
This PR adds two new search paths for the NVIDIA math libraries (cuSPARSE, cuBLAS, cuSOLVER). This fixes build issues on Polaris and Perlmutter.
* Add two new search paths for the NVIDIA math libs to configure
* Turn off CUDA math libs when CUDA is disabled
This PR fixes a few variable type inconsistencies arising from the mixedint build (see the sketch below). Additionally, it fixes the CUDA 11.1.1 build.
* Fix cuSPARSE version tag for using generic SpMM and new SpMV algorithms
* Bug fixes in hypre_ILU: S_row_starts computation and m -> big_m
* Bug fix: HYPRE_MPI_REAL -> HYPRE_MPI_COMPLEX
* Bug fix: HYPRE_Int -> HYPRE_BigInt
* Bug fix: HYPRE_MPI_INT -> HYPRE_MPI_BIG_INT
Co-authored-by: TotoGaz <49004943+TotoGaz@users.noreply.github.com>
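For reference, a hedged sketch (not code from this PR) of the mixedint convention these fixes enforce: global indices are HYPRE_BigInt and must be paired with HYPRE_MPI_BIG_INT in MPI calls, since HYPRE_Int and HYPRE_BigInt have different widths under --enable-mixedint.

```cpp
#include "_hypre_utilities.h"

/* Sketch of the mixedint convention: global row offsets are HYPRE_BigInt and
 * are communicated with HYPRE_MPI_BIG_INT; using HYPRE_MPI_INT (or storing
 * them in HYPRE_Int) breaks the mixedint build.                            */
void exchange_row_starts_sketch(hypre_MPI_Comm comm, HYPRE_BigInt first_row,
                                HYPRE_Int num_procs)
{
   HYPRE_BigInt *row_starts =
      hypre_CTAlloc(HYPRE_BigInt, num_procs + 1, HYPRE_MEMORY_HOST);

   /* HYPRE_MPI_BIG_INT matches HYPRE_BigInt; HYPRE_MPI_INT would not */
   hypre_MPI_Allgather(&first_row, 1, HYPRE_MPI_BIG_INT,
                       row_starts, 1, HYPRE_MPI_BIG_INT, comm);

   hypre_TFree(row_starts, HYPRE_MEMORY_HOST);
}
```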
* Added changes required for the new AMG benchmark, including a new routine that returns wall clock time and new parameters that generate cumulative numbers of nonzeros for the A, coarse-grid, and prolongation operators in AMG
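A hedged sketch of the kind of cumulative counts involved; the level arrays and accessors follow hypre's usual conventions but this is not the benchmark code, and it assumes the per-matrix nonzero counts have already been set.

```cpp
#include "_hypre_parcsr_mv.h"

/* Illustrative only: accumulate nonzero counts over an AMG hierarchy, e.g.
 * for the system matrices A_l and the interpolation operators P_l.
 * A_array/P_array are assumed to hold the per-level operators.            */
void cumulative_nnz_sketch(hypre_ParCSRMatrix **A_array,
                           hypre_ParCSRMatrix **P_array,
                           HYPRE_Int            num_levels)
{
   HYPRE_Real cum_nnz_A = 0.0, cum_nnz_P = 0.0;

   for (HYPRE_Int level = 0; level < num_levels; level++)
   {
      cum_nnz_A += hypre_ParCSRMatrixDNumNonzeros(A_array[level]);
      if (level < num_levels - 1)
      {
         cum_nnz_P += hypre_ParCSRMatrixDNumNonzeros(P_array[level]);
      }
   }

   hypre_printf("cumulative nnz(A) = %e, cumulative nnz(P) = %e\n",
                cum_nnz_A, cum_nnz_P);
}
```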
Several bug fixes and small changes for the SYCL build. Addition of full regression testing on florentia with consistent and correct results for the struct and ij tests with the SYCL backend.
This PR modifies hypre_ParCSRMatrixGenerateFFFC to act as a wrapper between the host and device implementations. Consequently, hypre_ParCSRMatrixGenerateFFFCHost has been added.
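A hedged sketch of the host/device wrapper pattern (the argument list is simplified; the real routine takes more parameters):

```cpp
#include "_hypre_parcsr_mv.h"

/* Sketch of the wrapper pattern only: dispatch between the Host and Device
 * implementations based on where the matrix lives, using hypre's
 * execution-policy query.                                                 */
HYPRE_Int GenerateFFFC_wrapper_sketch(hypre_ParCSRMatrix *A /*, ... */)
{
#if defined(HYPRE_USING_GPU)
   if (hypre_GetExecPolicy1(hypre_ParCSRMatrixMemoryLocation(A)) == HYPRE_EXEC_DEVICE)
   {
      /* return hypre_ParCSRMatrixGenerateFFFCDevice(A, ...); */
   }
   else
#endif
   {
      /* return hypre_ParCSRMatrixGenerateFFFCHost(A, ...); */
   }

   return hypre_error_flag;
}
```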
This PR adds HIP support to MGR. Additionally:
* Add sanity checks to the Setup and Solve functions
* Fix a bug in the computation of P_FF in MGR when using GPUs
* Enable AMG level profiling with HIP
* Enable ROCTX regions in the IJ driver
* Fixing the FEI interface to SuperLU_Dist. This uses the same structs
as src/parcsr_ls/dsuperlu.c.
* Updates to the CMake build system and elsewhere to support SuperLU_Dist, SuperLU, and the FEI.
Some Windows build updates.
* Reverting this change; it was only in the comments.
* Changes from Tim Dunn's FEI branch (feature/dunn13/tad220914-fei).
This prevents the solver from exiting with a failure when the initial guess is already the solution. There is also a fix to prevent a floating-point exception.
* Removing this #ifndef, as it is no longer necessary for our Windows build.
* Only enable CXX if building the FEI.
This PR updates hypre_device_allocator to use hypre's abstract memory model. This means that:
* hypre configured with unified memory support: those allocations will be in managed memory.
* hypre configured without unified memory support: those allocations will be in device memory.
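In terms of the abstract memory model, callers simply request HYPRE_MEMORY_DEVICE and hypre resolves it to managed or device memory according to the configuration, as described above. A minimal illustration (sketch only):

```cpp
#include "_hypre_utilities.h"

/* Sketch: code always asks for HYPRE_MEMORY_DEVICE; hypre maps that to
 * managed memory when configured with unified memory support, or to plain
 * device memory otherwise.                                                */
void device_alloc_sketch(HYPRE_Int n)
{
   HYPRE_Complex *work = hypre_TAlloc(HYPRE_Complex, n, HYPRE_MEMORY_DEVICE);

   /* ... use `work` in device kernels ... */

   hypre_TFree(work, HYPRE_MEMORY_DEVICE);
}
```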
This PR enables 2-stage GS relaxation to work properly with multi-component vectors.
* Some optimizations to make 2-stage GS run faster
* Update loop unrolling in DiagScaleVector2
* computeY is now a template argument (see the sketch below)
Co-authored-by: Paul Mullowney <Paul.Mullowney@nrel.gov>
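A generic sketch of the computeY-as-template-argument idea (not the actual DiagScaleVector2 kernel and not its exact semantics): making the flag a compile-time parameter lets the compiler remove the y update entirely in instantiations that do not need it, instead of branching inside the loop.

```cpp
#include "HYPRE_utilities.h"

/* Generic sketch, not hypre's kernel: with computeY as a template argument,
 * the `if (computeY)` test is a compile-time constant, so the compiler can
 * drop the y update from the computeY == false instantiation.             */
template <bool computeY>
void diag_scale_sketch(HYPRE_Int n, const HYPRE_Complex *diag,
                       const HYPRE_Complex *x, HYPRE_Complex *z,
                       HYPRE_Complex *y, HYPRE_Complex beta)
{
   for (HYPRE_Int i = 0; i < n; i++)
   {
      const HYPRE_Complex t = x[i] / diag[i];
      z[i] = t;
      if (computeY)   /* compile-time constant: dead code is eliminated */
      {
         y[i] = beta * y[i] + t;
      }
   }
}

/* usage: diag_scale_sketch<true>(...) when y is needed,
 *        diag_scale_sketch<false>(n, diag, x, z, nullptr, 0.0) otherwise */
```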