Branch Details
Original title in Czech: Výpočetní technika a informatika (FIT). Abbreviation: DVI4. Academic year: 2015/2016
Programme: Computer Science and Engineering
Length of Study:
Profile
The goal of the doctoral study programme is to provide outstanding graduates of the MSc study programme with a specialised university education of the highest level in selected fields of information technology, especially the areas of information systems, computer-based systems and computer networks, computer graphics and multimedia, and intelligent systems. The education obtained within this study programme also includes training and certification for scientific work.
Key learning outcomes
Guarantor
prof. RNDr. Milan Češka, CSc.
Issued topics of Doctoral Study Program
Tutor: Zemčík Pavel, prof. Dr. Ing., dr. h. c.
The topic focuses on algorithms of video processing. Its main goal is to research these algorithms so that their characteristics and application possibilities are better understood, so that they are analyzed in depth, improved or newly created, and efficiently implemented, e.g. on a CPU, on a CPU accelerated through SSE instructions, in embedded systems, in embedded systems with an FPGA, or on other platforms. Programming work is expected in C, C++, C#, assembly language, CUDA, OpenCL, or VHDL. The algorithms of interest include:
After mutual agreement, individually selected algorithms can also be considered, provided they fall within the general topic.
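To give a flavour of the SSE-accelerated implementation work mentioned above, here is a minimal sketch of a saturating brightness adjustment over an 8-bit pixel buffer. It is an illustrative example only, not part of any listed topic; the function name and interface are our own assumptions.

```cpp
#include <immintrin.h>  // SSE2 intrinsics
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Saturating brightness increase over an 8-bit grayscale buffer.
// Processes 16 pixels per iteration with SSE2, then a scalar tail.
void brighten(uint8_t* px, std::size_t n, uint8_t delta) {
    const __m128i d = _mm_set1_epi8(static_cast<char>(delta));
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(px + i));
        v = _mm_adds_epu8(v, d);  // unsigned saturating add: 250 + 10 -> 255
        _mm_storeu_si128(reinterpret_cast<__m128i*>(px + i), v);
    }
    for (; i < n; ++i)  // remaining pixels, scalar with explicit clamp
        px[i] = static_cast<uint8_t>(std::min<int>(255, px[i] + delta));
}
```

The same kernel could equally be written in CUDA, OpenCL, or VHDL, which is precisely the kind of implementation-space exploration the topic describes.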
Tutor: Matoušek Petr, doc. Ing., Ph.D., M.A.
The project is concerned with advanced methods of computational photography. The aim is to research new computational photography methods, comprising software solutions potentially supported by new optics and/or hardware. Our interest is in HDR image and video processing, color-to-grayscale conversions, spectral imaging, and more.
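As a baseline for the color-to-grayscale conversions mentioned above, a common starting point is a weighted luma; the sketch below uses the standard Rec. 709 weights, which are a conventional default and not the project's proposed method.

```cpp
#include <cmath>
#include <cstdint>

// Rec. 709 luma: perceptual weighting of R, G, B channels.
// Research conversions aim to preserve more contrast than this baseline.
uint8_t to_gray(uint8_t r, uint8_t g, uint8_t b) {
    double y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    return static_cast<uint8_t>(std::lround(y));
}
```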
Tutor: Čadík Martin, doc. Ing., Ph.D.
Tutor: Zbořil František, doc. Ing., Ph.D.
Tutor: Zendulka Jaroslav, doc. Ing., CSc.
Tutor: Herout Adam, prof. Ing., Ph.D.
Tutor: Hanáček Petr, doc. Dr. Ing.
The goal of the work is to research and create algorithms that enable augmented reality on mobile (ultra-mobile) devices. It mainly concerns algorithms for pose estimation in space by means of computer vision and the sensors embedded in the device. Furthermore, the work will elaborate on algorithms for rendering virtual elements into the real-world scene and on applications of augmented reality on mobile devices.
Tutor: Vojnar Tomáš, prof. Ing., Ph.D.
Tutor: Dvořák Václav, prof. Ing., DrSc.
Tutor: Zbořil František, doc. Ing., CSc.
Tutor: Drahanský Martin, prof. Ing., Ph.D.
Modern operating systems (OS) must meet many requirements, not only in terms of flexibility and efficiency of execution on recent computing platforms, but also in terms of the dependability of their kernels and of the services they provide to the application layer. The aim of the project is:
The project can be oriented into various directions, such as low-power applications / OS or application / OS designed to run in an embedded or a multi-core environment. During the project, a "conventional" OS such as Unix, Linux, Android, Windows, iOS or a specialized OS such as QNX, uC/OS-I (II, III), FreeRTOS, MQX can be utilized.
Tutor: Strnadel Josef, Ing., Ph.D.
Tutor: Švéda Miroslav, prof. Ing., CSc.
As is well known, a compiler consists of a language-dependent analysis part (the front end), which produces an internal representation, and a following part (the back end), which generates object code or assembly. Front-end construction has been a practically solved problem for many years and is taught at the bachelor's level: generators of the analytical part of a compiler exist, mostly based on LALR grammars, and the context-dependent part is easily built according to a well-known methodology. In some domains, moreover, only C/C++ is used, and front ends for such compilers are freely downloadable on the net. With a suitably designed internal representation, the front-end generator depends only on the input language; such internal representations exist, and the definition language for the generator is typically an attribute grammar. The real practical problems arise in back-end generation. The number of processor architectures is high and keeps growing with mobile phones, other embedded systems, the Internet of Things, medical devices, automotive devices, etc. One of the main reasons is the conflict between using available processors that already have developed compilers and, at the same time, their large power consumption: the devices being designed cannot use such processors because of power consumption and often also license price. The number and variety of processor architectures is therefore growing quickly. It would be sufficient to have at our disposal a generating tool for the back end similar to the one for the front end; in this way, porting applications (written mainly in C) to new processors could be greatly accelerated.
Due to the increasing complexity of embedded systems and irregular microprocessor architectures, highly optimizing compilers are needed to reduce the complexity of such platforms and their power dissipation. The power consumption of current microprocessors is dominated by dynamic power, which can be reduced rapidly by eliminating memory accesses, minimizing executed cycles, and minimizing switching activity on buses. What can we build on? Within the research of the Lissom group, a reconfigurable C compiler was developed; it is now part of the Codasip Studio of Codasip Ltd. (www.codasip.com) and is also available to our university. The aim of this work is a critical evaluation of the current state of research and methods, and the design of an effective generation methodology and generator focused on power efficiency. The work will address optimization by register assignment for irregular architectures and, for example, post-pass code optimizations minimizing dynamic switching on instruction memory buses. More information in person.
Possible engagement in paid activities:
- a basic stipend of the PhD programme
- remuneration from grant projects
- a part-time job in the co-operating company
- possibility of an internship
Contact and information: Prof. Ing. Tomáš Hruška, CSc. - hruska@fit.vutbr.cz
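To illustrate the kind of back-end work involved, the sketch below shows a minimal linear-scan register allocation, a standard textbook technique. The names, data shapes, and simplistic spilling policy are our own assumptions for illustration, not the Lissom/Codasip implementation.

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// One live interval per virtual register.
struct Interval { std::string vreg; int start, end; };

// Minimal linear scan: returns vreg -> physical register index,
// or -1 for spilled values. k = number of physical registers.
std::map<std::string, int> linear_scan(std::vector<Interval> iv, int k) {
    std::sort(iv.begin(), iv.end(),
              [](const Interval& a, const Interval& b) { return a.start < b.start; });
    std::map<std::string, int> assign;
    std::vector<Interval> active;  // intervals currently holding a register
    std::vector<int> free_regs;
    for (int r = k - 1; r >= 0; --r) free_regs.push_back(r);

    for (const Interval& cur : iv) {
        // Expire intervals that ended before cur starts, freeing registers.
        for (auto it = active.begin(); it != active.end();) {
            if (it->end < cur.start) {
                free_regs.push_back(assign[it->vreg]);
                it = active.erase(it);
            } else ++it;
        }
        if (free_regs.empty()) {
            assign[cur.vreg] = -1;  // no register left: spill (simplified)
        } else {
            assign[cur.vreg] = free_regs.back();
            free_regs.pop_back();
            active.push_back(cur);
        }
    }
    return assign;
}
```

A real retargetable back end must additionally model irregular register files and instruction costs per target architecture, which is exactly what the generator described above has to derive from the definition language.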
Tutor: Hruška Tomáš, prof. Ing., CSc.
Tutor: Kreslíková Jitka, doc. RNDr., CSc.
Tutor: Sekanina Lukáš, prof. Ing., Ph.D.
Tutor: Kotásek Zdeněk, doc. Ing., CSc.
Tutor: Meduna Alexandr, prof. RNDr., CSc.
Tutor: Smrž Pavel, doc. RNDr., Ph.D.
This topic shares the motivation of back-end generation for retargetable compilers: while front-end construction is a practically solved problem (LALR-based generators of the analytical part, attribute grammars as the definition language, freely downloadable C/C++ front ends), the real practical problems arise in back-end generation. The number and variety of processor architectures grows quickly with mobile phones, other embedded systems, the Internet of Things, medical devices, automotive devices, etc., driven by the conflict between available processors with already developed compilers on one side and energy consumption and license price on the other. A back-end generating tool analogous to the front-end generators would greatly speed up porting applications (written mainly in C) to new processors. Back-end generation is a relatively new topic that is not taught in university curricula (including at FIT).
What we should possess:
So, we want to generate a fast and effective compiler (currently for the C language only) for different processor architectures described by the definition language.
What can we build on? Within the research of the Lissom group, a reconfigurable C compiler was developed; it is now part of the Codasip Studio of Codasip Ltd. (www.codasip.com) and is also available to our university. We have experience with this research. This is research at the world level that is practically demanded and by no means definitively solved. Definition languages, the internal representation, and the reconfigurable C compiler already exist. We have developed a working generator, but it is not certain whether it is effective enough across a sufficient range of architectures.
More information verbally.
Tutor: Kořenek Jan, doc. Ing., Ph.D.
The project deals with image and video quality assessment metrics (IQM). The aim is to explore new ways of incorporating human visual system properties into IQM. In particular, we will consider perception of HDR images and the utilization of additional knowledge (in the form of metadata, 3D information, etc.) about the tested scenes.
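For context, the simplest widely used quality metric is PSNR, which ignores human visual system properties entirely; research IQM aim to improve on baselines like the sketch below (an illustrative implementation, not the project's metric).

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Peak signal-to-noise ratio for 8-bit images (same size assumed).
// Higher is better; identical images give infinity.
double psnr(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b) {
    double mse = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = double(a[i]) - double(b[i]);
        mse += d * d;
    }
    mse /= double(a.size());
    if (mse == 0.0) return std::numeric_limits<double>::infinity();
    return 10.0 * std::log10(255.0 * 255.0 / mse);  // peak value 255 for 8-bit
}
```

PSNR correlates poorly with perceived quality, which is precisely the gap that perceptually motivated metrics try to close.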
Tutor: Kočí Radek, Ing., Ph.D.
Currently, there are several well-established methods of distributed processing of BigData (e.g., batch processing by MapReduce, stream processing, various data distribution models for MPI, etc.). These methods utilize various patterns for data distribution and processing that affect the design of BigData processing systems. There is therefore an emerging need to apply, combine, and switch at runtime among different data distribution and processing patterns in a BigData processing system, without the necessity of redesigning or rebuilding the system. For example, we may need to optimize the performance of a BigData processing system by making use of existing runtime monitoring data, which may eventually result in switching the data processing from often-repeated batch processing to continuous stream processing. The objectives of this research are:
The Ph.D. student will co-operate with other doctoral students and employees of the research group of information and database systems.
The full-time Ph.D. student will be involved in teaching according to needs of the department and the faculty.
The supervisor-specialist (consultant): RNDr. Marek Rychly, Ph.D.
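The batch-versus-stream distinction in the topic above can be illustrated with a toy aggregation: the same statistic computed by a full pass over stored data versus incrementally, one event at a time. The names here are illustrative assumptions, not part of any named framework.

```cpp
#include <cmath>
#include <numeric>
#include <vector>

// Batch style: a full pass over the stored data set.
double batch_mean(const std::vector<double>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0.0) / double(xs.size());
}

// Stream style: one value at a time with O(1) state; produces the same
// result incrementally, so the system could switch patterns at runtime.
struct StreamMean {
    double mean = 0.0;
    long n = 0;
    void push(double x) { ++n; mean += (x - mean) / double(n); }
};
```

Switching a real system between such patterns without a rebuild (the point of the topic) requires that operators expose equivalent batch and incremental forms like these.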
The topic focuses on algorithms of computer graphics and computer image synthesis in general. Its main goal is to research these algorithms so that their characteristics and application possibilities are better understood, so that they are analyzed in depth, improved or newly created, and efficiently implemented, e.g. on a CPU, on a CPU accelerated through SSE instructions, in embedded systems, in embedded systems with an FPGA, or on other platforms. Algorithms of interest include 3D model processing and acquisition. Programming work is expected in C, C++, C#, assembly language, CUDA, OpenCL, or VHDL. The algorithms of interest include:
As in the preceding compiler topics, the motivation is back-end generation for retargetable compilers: front-end construction is a practically solved problem (LALR-based generators of the analytical part, attribute grammars as the definition language, freely downloadable C/C++ front ends), whereas the real practical problems arise in back-end generation. The number and variety of processor architectures grows quickly with mobile phones, other embedded systems, the Internet of Things, medical devices, automotive devices, etc., and energy consumption and license prices often rule out processors that already have developed compilers. A back-end generating tool analogous to the front-end generators would greatly speed up porting applications (written mainly in C) to new processors. Back-end generation is a relatively new topic that is not taught in university curricula (including at FIT).
One of the back-end problems is programming parallel architectures: the compiler must be constructed so that it can use instructions for parallel processing. A suitable language for programming such systems is OpenCL (the Open Computing Language), an industry standard for parallel programming of heterogeneous computer systems, e.g. personal computers equipped with a GPU (graphics), an APU, or possibly a DSP (audio). What can we build on? Within the research of the Lissom group, a reconfigurable C compiler was developed; it is now part of the Codasip Studio of Codasip Ltd. (www.codasip.com) and is also available to our university. We have experience with this research. This is research at the world level that is practically demanded and by no means definitively solved. Definition languages, the internal representation, and the reconfigurable C compiler already exist. We have developed a working generator, but it is not certain whether it is effective enough across a sufficient range of architectures. The goal of this thesis is the research and development of an OpenCL compiler that exploits the parallelism specified in OpenCL and targets mainly architectures with SIMD instructions, VLIW architectures, and multi-threaded architectures. More information in person.
Possible engagement in paid activities:
- a basic stipend of the PhD programme
- remuneration from grant projects (local and European)
- a part-time job in the co-operating company
- possibility of an internship
Contact and information: Prof. Ing. Tomáš Hruška, CSc. - hruska@fit.vutbr.cz
This subject stems from prof. Hruška's long-term research in the area of process models, carried out together with PhD students in recent years. The foundation of this research is the analysis of processes with the goal of optimizing them. The research runs in two branches:
1. Workflow models, their analysis, and ways to business process optimization
2. Process mining
The first branch works with the classification of workflow models proper, their standards, and techniques of discrete modelling. The goal is to find a workflow process modelling method using modern programming languages, possibly with the option of applying an object-oriented approach. This method can further be optimized and compared with existing standards and systems in the area of workflow model description. This topic is addressed within the TIP grant "Workflow System as a Powerful Tool for Business Process Reengineering".
The second area, process mining, focuses on the detection, analysis, and optimization of business models based on data from log files. This analysis represents a currently missing link between classical business process analysis and data mining.
The main goals are following:
Information systems, by their very definition, optimize process management by supplying relevant information about a product and about newly generated requests. We aim to design and implement an information system for modelling, managing, and subsequently optimizing business processes. The goal is to automate the handover of information between particular sources and to ensure that business rules and security are maintained. The resulting system will provide effective information flow management.
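The elementary statistic behind the process-mining branch described above is the "directly-follows" relation extracted from an event log, from which discovery algorithms build a process model. The sketch below is a minimal illustration under that assumption; activity names are invented examples.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Count "directly-follows" pairs in one event-log trace: for each
// adjacent pair of activities (a, b), increment the edge count a -> b.
std::map<std::pair<std::string, std::string>, int>
directly_follows(const std::vector<std::string>& trace) {
    std::map<std::pair<std::string, std::string>, int> dfg;
    for (std::size_t i = 0; i + 1 < trace.size(); ++i)
        ++dfg[{trace[i], trace[i + 1]}];
    return dfg;
}
```

Aggregating these counts over all traces of a log yields the directly-follows graph, the usual starting point for discovering and then optimizing a business process model.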
Possible engagement in paid activities:
- a basic stipend of the PhD programme
- remuneration from grant projects
- local
- European
- a part-time job in the co-operating company
- possibility of an internship
Contact and information
Prof. Ing. Tomáš Hruška, CSc. - hruska@fit.vutbr.cz
In the back end, static analysis runs during compilation. It makes it possible to generate code that is optimal with respect to the information available at compile time. For some architectures, however, this approach is not sufficient. Especially when the number of application programs is limited (a frequent situation for many processors), data obtained after program execution (a profile) must also be used for optimization. What can we build on? Within the research of the Lissom group, a reconfigurable C compiler was developed; it is now part of the Codasip Studio of Codasip Ltd. (www.codasip.com) and is also available to our university. We have experience with this research. This is research at the world level that is practically demanded and by no means definitively solved. Definition languages, the internal representation, and the reconfigurable C compiler already exist. We have developed a working generator, but it is not certain whether it is effective enough across a sufficient range of architectures.
Tutor: Černocký Jan, prof. Dr. Ing.
Multi-core solutions represent a trend in recent computing architectures and applications, primarily due to the power and thermal limitations of technology scaling. From an application perspective, it is necessary to leverage parallelism and to limit synchronization, using appropriate scheduling mechanisms over the available cores to meet application-specific constraints related, e.g., to power consumption, quality of the services delivered, performance, dependability, or timeliness of reactions. The aim of the project is:
The project can be oriented into various directions such as multi-core system design optimization, autonomously adaptive / reconfigurable systems, task / communication scheduling or operating system architectures for multi-core systems. During the project, "conventional" multi-core platforms such as ARM Cortex-A9, Intel/AMD or specialized ones such as Xilinx Zynq 7000, Altera SoC FPGA or Microsemi Smartfusion2 can be utilized to check the proposed method in practice.
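The "leverage parallelism, limit synchronization" principle stated above can be sketched with a minimal reduction: each worker owns a disjoint chunk, so no locking is needed until the final combine. This is a generic illustration in standard C++, not tied to any of the platforms named above.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// Split a reduction across worker threads; each worker writes only its
// own slot of `partial`, so the threads need no synchronization.
long long parallel_sum(const std::vector<int>& v, unsigned workers) {
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;
    std::size_t chunk = (v.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            std::size_t lo = w * chunk;
            std::size_t hi = std::min(v.size(), lo + chunk);
            for (std::size_t i = lo; i < hi; ++i) partial[w] += v[i];
        });
    }
    for (auto& t : pool) t.join();  // combine only after all workers finish
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}
```

Real multi-core scheduling research then adds the constraints the topic lists: mapping such tasks to cores under power, deadline, and dependability requirements.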
The project deals with geo-localization of mobile devices in unknown environments using computer vision and computer graphics methods. The aim is to investigate and develop new image registration techniques (with geo-localized image database or 3D terrain model). The goal is an efficient implementation of proposed methods on mobile devices as well as search for additional applications in the area of image processing, computational photography, and augmented reality.
Tutor: Janoušek Vladimír, doc. Ing., Ph.D.
Tutor: Růžička Richard, doc. Ing., Ph.D., MBA