NCC Norway

A short description of the Norwegian NCC

SINTEF, NORCE and Sigma2 have joined forces to establish the Norwegian EuroCC Competence Centre.

The Competence Centre raises awareness and provides Norwegian SMEs with the expertise necessary to take advantage of the innovation possibilities created by HPC (High-Performance Computing) technology, including HPDA (High-Performance Data Analytics), ML (Machine Learning) and AI (Artificial Intelligence), thus increasing the SMEs' competitiveness.

Test before invest

Competence Centre partner Sigma2 operates the national e-infrastructure for public research, which provides resources for many types of computational needs, including AI. This infrastructure is funded by the Research Council of Norway together with the four oldest universities in Norway.

Through Sigma2, the Competence Centre can provide free-of-charge Small-Scale Explorative Work (SSEW) services for trying out an idea or testing before investing (up to 20,000 CPU hours). Sigma2 is also allowed to sell up to 5% of the total capacity to commercial customers, but state regulations require that such services are sold at prices comparable to the open market price.

Projects with NCC involvement

The Competence Centre has engaged in projects with several SMEs across diverse domains:

  • An AI workflow in agriculture that automatically detects field boundaries to produce accurate seed-type recommendations and quantity estimates. The NCC helps the SME scale the workflow as it expands its business globally.
     
  • Introducing machine learning in industrial production to help reduce scrap and production stops.
     
  • Large-scale modelling of a bubble curtain in weather models, demonstrating a reduction of hurricane energy before field testing.
     
  • An HPC-based testing environment for a digital twin of zero-emission autonomous public transportation.
     
  • Scaling an AI-based natural-language document-processing method so that building documentation can be searched and indexed.
     
  • Scaling electricity-market modelling from its current HPC use to larger HPC systems, increasing the resolution and moving from stochastic modelling to more developed models that produce better data for decision-making.

Mentoring and consulting

We offer mentoring and consulting on:

  • Software development
  • Maintenance
  • Testing
  • Execution
  • Optimization
  • Data management and analysis towards scaling in highly parallel environments

Furthermore, we help you understand the potential of digital workflows, implement and optimize them, and adapt digital solutions to the specific domains that matter to your business. This also includes putting you in contact with domain-specific and business-development competencies.

This makes it possible to abstract away the technicalities of HPC, HPDA, AI, ML and related technologies, so that you can spend your time discovering and creating value in environments your employees already know.

These services are free of charge for industrial users and public services, enabled by the EuroCC project.

The core mission of EuroCC is to help industry and public services increase their competence in, and adoption of, HPC, HPDA, AI and ML technologies. We want you to gain added value through, for example, increased efficiency, quality of service, innovation and competitive advantage.

Project example

When working with a serial workflow, such as an AI workload, you often have a series of tasks that each depend on the output of the preceding task, for example pre-processing, training and post-processing.

When a workflow like this runs for days or weeks as a single job, you must reserve a computing resource that satisfies all the workflow's requirements at once: many CPU cores, a lot of memory and perhaps a GPU. The expensive CPUs, memory or GPU may then sit idle while the other resources are in use, which increases your costs and limits your ability to scale the workload.
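To make this concrete, here is a minimal sketch of submitting the whole workflow as one monolithic job from Python, assuming a Slurm-based cluster (as on Sigma2's systems); the script name and resource numbers are purely illustrative, not taken from a real project:

    import subprocess

    # A single job that reserves every resource for the entire run, even
    # though the GPU idles during pre-/post-processing and most CPU cores
    # idle during training. All numbers and file names are illustrative.
    subprocess.run(
        [
            "sbatch",
            "--job-name=full-workflow",
            "--cpus-per-task=32",   # needed mainly for pre-/post-processing
            "--mem=128G",           # needed mainly for pre-/post-processing
            "--gres=gpu:1",         # needed only for training
            "--time=3-00:00:00",    # everything stays reserved for days
            "workflow.sh",          # hypothetical script running all stages
        ],
        check=True,
    )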

At the Norwegian Competence Centre for HPC we help customers split such a workflow into individual tasks, with the hardware requirements and task dependencies defined at the scheduler level. This allows the CPU-hungry tasks to be spread over more CPU resources to reduce the runtime, and ensures that expensive hardware such as a GPU is not reserved and paid for while waiting for the pre-processing task to make the data available. The same applies while the GPU-dependent task is running: it can be scaled over more GPUs for a shorter period, while the many CPU cores and large amount of memory needed for pre-processing are released and re-acquired only when needed for post-processing.
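A minimal sketch of the split approach, again assuming Slurm: the stage scripts (preprocess.sh, train.sh, postprocess.sh), resource numbers and time limits are hypothetical, while --parsable and --dependency=afterok are standard Slurm features that let the scheduler enforce the stage order:

    import subprocess

    def submit(args):
        """Submit a job via sbatch; --parsable makes it print just the job ID."""
        result = subprocess.run(
            ["sbatch", "--parsable", *args],
            capture_output=True, text=True, check=True,
        )
        # --parsable output is "<jobid>" or "<jobid>;<cluster>"
        return result.stdout.strip().split(";")[0]

    # 1. CPU-hungry pre-processing: many cores and lots of memory, no GPU.
    pre = submit(["--job-name=pre", "--cpus-per-task=32", "--mem=128G",
                  "--time=12:00:00", "preprocess.sh"])

    # 2. Training starts only if pre-processing finishes successfully;
    #    the GPU is reserved (and paid for) only during this stage.
    train = submit([f"--dependency=afterok:{pre}", "--job-name=train",
                    "--gres=gpu:1", "--cpus-per-task=4",
                    "--time=1-00:00:00", "train.sh"])

    # 3. Post-processing re-acquires the CPU/memory profile once training is done.
    submit([f"--dependency=afterok:{train}", "--job-name=post",
            "--cpus-per-task=32", "--mem=128G",
            "--time=06:00:00", "postprocess.sh"])

Each stage now queues with only the hardware it actually needs, the large CPU and memory reservation is released between pre- and post-processing, and the pre-processing stage can additionally be fanned out over several such jobs to shorten the runtime.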

This type of optimization is not only applicable when adapting a workload to HPC or a supercomputer; it also increases the efficiency of commercial cloud resources and reduces cost. Want to know more? Check https://lnkd.in/d6XG2JBT