Thread: 2D runtimes and GPU cards

  1. #1
    Forum Moderator
    Innovyze Employee

    Join Date: Feb 2013
    Posts: 18

    2D runtimes and GPU cards

    Run-times for 2D models are a common talking point. Significant reductions in runtime can be achieved when 2D models are processed on an NVIDIA GPU card rather than on the PC's basic processor (CPU).

    The Simulation Engine in InfoWorks ICM should work with all NVIDIA GPU cards that are CUDA enabled and have a Compute Level of 2.0 or better. A list of current NVIDIA GPU cards with CUDA support can be found at http://developer.nvidia.com/cuda-gpus. Obviously, we can’t be sure that the entire range of Compute Level 2.0 GPU cards from NVIDIA actually works, or indeed what gains they offer, because we simply don’t have the resources to try them all with InfoWorks ICM! However, we have conducted tests using Tesla C2050 and C2075, Quadro 4000 and 2000, and GeForce GTX 560 Ti, GTX 580, GTX 460M and GT 440 cards, which are some of the more popular devices, so we know for sure that they all work with InfoWorks ICM.
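
    If you want to confirm what CUDA can see on a given PC, a minimal device query along the lines below will list each card and its Compute Level. This is just an illustrative sketch, not part of ICM; it assumes the free CUDA toolkit is installed and is compiled with nvcc:

        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
                std::printf("No CUDA-capable GPU visible (check the NVIDIA driver).\n");
                return 1;
            }
            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, i);
                // ICM's 2D GPU engine needs Compute Level (capability) 2.0 or better
                std::printf("Device %d: %s, Compute Level %d.%d\n",
                            i, prop.name, prop.major, prop.minor);
            }
            return 0;
        }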

    There are details of specific 2D runtimes on different CPU and GPU hardware in the blog article Runtime comparisons for 2D models in InfoWorks ICM.

    It’s important to note that your PC must have the latest official NVIDIA drivers installed for the GPU to be able to process your 2D models. Generic drivers (commonly deployed in corporate environments, where everyone is given a ‘standard’ install of Windows) will render the GPU card invisible to ICM.
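
    A quick way to check whether the machine is running a proper NVIDIA driver rather than a generic one is to run nvidia-smi from a command prompt, or to ask the CUDA runtime directly. The sketch below again assumes the CUDA toolkit and is nothing specific to ICM:

        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            int driverVersion = 0, runtimeVersion = 0;
            // driverVersion is reported as 0 when no CUDA-capable driver is installed
            cudaDriverGetVersion(&driverVersion);
            cudaRuntimeGetVersion(&runtimeVersion);
            std::printf("CUDA driver version: %d, runtime version: %d\n",
                        driverVersion, runtimeVersion);
            if (driverVersion == 0)
                std::printf("No CUDA driver detected - the card will be invisible to applications.\n");
            return 0;
        }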

    It’s very important to note that the GPU card plays no part at all in regular 1D calculations. Also, these cards give no advantage for 2D calculations in InfoWorks CS or RS and we will not be updating either of these applications to detect or utilise GPU technology.
    Last edited by Andrew Walker; February 27, 2013 at 06:17 AM.
    Andrew Walker
    Client Service Manager | Innovyze | Wallingford, UK
    Web: www.innovyze.com | Twitter: @innovyze

  2. #2
    Senior Member
    Join Date: Feb 2013
    Posts: 125
    We've found mixed benefits - some sims take longer and some are quicker. It must be to do with the process of transferring data to the GPU taking CPU time, which is only regained if there's a lot of 2D work going on. We've noticed some subtle differences in results as well.

  3. #3
    Forum Moderator
    Innovyze Employee

    Join Date: Feb 2013
    Posts: 18
    Hi Kristian - Your observations are correct. There's quite an intensive process in transferring the 2D part of the model to the GPU and passing back the results. The more elements in the mesh, the longer this takes. If only a minority of the 2D mesh actually gets wet, or if the wet parts are only wet for a short time (compared with the overall simulation time), then the benefits of GPU technology will tail off. By far the biggest gain is seen when the entire mesh is wet for the majority of the simulation, such as when rainfall is applied directly to the mesh (which wets all the 2D elements right from the start of the simulation).
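
    As a purely illustrative back-of-envelope (the timings below are invented, not measured from ICM), you can see how a fixed transfer overhead is only worth paying when the wet 2D work dominates the run:

        #include <cstdio>

        // Hypothetical timings, for illustration only - not ICM measurements.
        int main() {
            const double cpu1d      = 20.0;   // minutes of 1D work (unaffected by the GPU)
            const double cpu2d      = 100.0;  // minutes of 2D work on the CPU with a fully wet mesh
            const double gpuSpeedup = 10.0;   // assumed raw 2D speed-up on the GPU
            const double transfer   = 15.0;   // minutes moving the mesh and results to and from the card

            const double fractions[] = {0.1, 0.4, 0.7, 1.0};  // share of the run the mesh is wet
            for (double wetFraction : fractions) {
                double cpuTime = cpu1d + wetFraction * cpu2d;
                double gpuTime = cpu1d + transfer + wetFraction * cpu2d / gpuSpeedup;
                std::printf("wet fraction %.1f: CPU %.0f min, GPU %.0f min (x%.1f)\n",
                            wetFraction, cpuTime, gpuTime, cpuTime / gpuTime);
            }
            return 0;
        }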

    I'm not surprised you are seeing slightly different answers between 2D runs conducted purely on the CPU and those off-loaded to the GPU. The 2D GPU engine is quite separate from the 2D engine that runs on the normal computer processor (CPU). The GPU engine is written in NVIDIA's CUDA programming language; the CPU engines (1D and 2D) are a mixture of C++ and FORTRAN. The different languages and compilers will produce slightly different answers due to rounding in the minor decimal places, and given the millions of calculations performed during an ICM simulation there will always be slight differences in the overall results.

    Interestingly, if you compare results produced by the 32-bit edition of ICM with those output by the 64-bit edition you'll also see slight differences, especially on very large models run for long simulation times. The cause is the same: the different compilers produce slightly different results due to rounding in the minor decimal places.
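
    If you want to see the effect in isolation, here's a toy example (nothing to do with ICM's actual code) showing that simply summing the same numbers in a different order - something different compilers and optimisers are free to do - shifts the answer in the last decimal places:

        #include <cstdio>

        // Toy illustration only: the same terms summed in a different order
        // give a slightly different floating-point answer.
        int main() {
            const int n = 1000000;
            float forward = 0.0f, backward = 0.0f;

            for (int i = 1; i <= n; ++i)
                forward += 1.0f / i;      // largest terms first
            for (int i = n; i >= 1; --i)
                backward += 1.0f / i;     // smallest terms first

            std::printf("forward  sum: %.7f\n", forward);
            std::printf("backward sum: %.7f\n", backward);
            std::printf("difference : %.7g\n", forward - backward);
            return 0;
        }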
    Andrew Walker
    Client Service Manager | Innovyze | Wallingford, UK
    Web: www.innovyze.com | Twitter: @innovyze

  4. #4
    Senior Member
    Join Date: Feb 2013
    Posts: 125
    Thanks, Andrew. I tend to do all sims for a given model either with or without the GPU, for consistency - not that the rounding differences matter much compared with the uncertainties in the data, but it does look silly if your option ends up flooding a property that the base model doesn't, just because of that!
