Accepted Special Sessions and Workshops        


Index:

Special Sessions:

  • Special Session 01: Soft Computing Methods in Quantitative Management and Decision Making
    Florin Gheorghe Filip, Romanian Academy, Romania (ffilip@acad.ro)
    Ioan Dzitac, Agora University of Oradea, Romania (rector@univagora.ro)

    Hard computing is conventional computing: it requires a precisely stated analytical model, but many analytical models are valid for ideal cases only, while real-world problems exist in non-ideal environments. The term Soft Computing was coined by Lotfi A. Zadeh in the early 1990s. According to Zadeh's definition, Soft Computing is based on Fuzzy Logic, Neural Networks, Support Vector Machines, Evolutionary Computation, Machine Learning and Probabilistic Reasoning. Soft computing can deal with ambiguous and noisy data, and it is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for Soft Computing is the human mind. Artificial and Computational Intelligence based on soft computing provide the background for the development of smart management systems. Today, such intelligent systems may take many forms, encompass a variety of approaches and include many design challenges. The goal of this special session is to bring together researchers interested in applications of soft computing algorithms and procedures in quantitative management and decision making, in order to exchange ideas on problems and solutions, and to work together in a friendly environment. A minimal illustration of fuzzy membership, one of these building blocks, follows the topic list below. Topics of interest include, but are not limited to:

    * Ant colony optimization algorithms
    * Artificial intelligence methods for web mining
    * Computational intelligence methods for data mining
    * Decision support systems for quantitative management
    * Decision making with missing and/or uncertain data
    * Fuzzy and neuro-fuzzy modelling and simulation
    * Fuzzy-sets-based models in operation research
    * Knowledge Discovery in Databases
    * Machine learning for intelligent support of quantitative management
    * Neural networks in decision making tools
    * Smarter decisions
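
    As a concrete illustration of the tolerance of imprecision mentioned above, here is a minimal Python sketch of fuzzy membership: a crisp demand value belongs partially to several linguistic categories at once. The triangular membership function and all numeric ranges are illustrative assumptions, not taken from any session paper.

        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        # Degree to which a demand of 42 units/day is "low", "medium" or "high"
        demand = 42.0
        mu = {
            "low":    tri(demand, 0, 20, 45),
            "medium": tri(demand, 30, 50, 70),
            "high":   tri(demand, 55, 80, 100),
        }
        print(mu)  # partial membership in several sets at once

        # A fuzzy rule such as "IF demand is medium THEN stock level is medium"
        # fires to the degree mu["medium"], rather than all-or-nothing.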


  • Special Session 02: Intelligent Decision Making and Consensus

    Enrique Herrera-Viedma, Granada University, Spain (viedma@decsai.ugr.es)
    Hamido Fujita, Iwate Prefectural University, Japan (HFujita-799@acm.org)
    Francisco Chiclana, De Montfort University, U.K. (chiclana@dmu.ac.uk)
    Francisco Javier Cabrerizo, UNED, Spain (cabrerizo@issi.uned.es)
    Ignacio Javier Pérez, University of Cadiz (ignaciojavier.perez@uca.es)

    Intelligent decision making processes are developed by automatic decision-making systems that support individual or organisational decision making using different Information Technologies (such as the Web and social networks) and Artificial Intelligence tools (such as Computational Intelligence tools). Intelligent decision making involves the use of preference modelling and consensus processes. Preference modelling deals with the representation and modelling of the preferences provided by the experts in a problem. Fuzzy logic is a computational intelligence tool that provides an adequate framework to deal with the uncertainty present in user opinions, and fuzzy preference modelling has been satisfactorily applied in intelligent decision making. On the other hand, consensus is an important area of research in intelligent decision making. Consensus is defined as a state of mutual agreement among members of a group where all opinions have been heard and addressed to the satisfaction of the group. A consensus reaching process is a dynamic and iterative process composed of several rounds in which the experts express, discuss and modify their preferences.
    The objective of the proposed session is to highlight the ongoing research on intelligent decision making, fuzzy preference modelling and consensus processes under uncertainty. Focusing on theoretical issues and applications in various domains, ideas on how to handle consensus processes in intelligent decision making under fuzzy preference modelling, both in research and development and in industrial applications, are welcome. Papers describing advanced prototypes, systems, tools and techniques, as well as general survey papers indicating future directions, are also encouraged; a minimal sketch of a consensus measure follows the topic list below. Topics appropriate for this special session include, but are not limited to:

    * Fuzzy preference modelling in intelligent decision making
    * Intelligent decision making system applications
    * Consensus in fuzzy multi-agent decision making
    * New models of fuzzy preference modelling
    * Intelligent decision making system for big data
    * Intelligent decision making in Web 2.0 frameworks
    * Intelligent decision making in presence of incomplete information
    * Aggregation of preferences
    * Intelligent decision making in dynamic contexts
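
    To make the notion of a consensus round concrete, the following minimal Python sketch computes a simple consensus degree among experts who state fuzzy preference relations; the similarity measure, the data and the 0.85 threshold are illustrative assumptions (many alternative measures exist in the literature).

        import numpy as np

        # Hypothetical fuzzy preference relations of 3 experts over 3 alternatives;
        # entry [i][j] in [0, 1] is the degree to which alternative i is preferred to j.
        experts = np.array([
            [[0.5, 0.7, 0.9], [0.3, 0.5, 0.6], [0.1, 0.4, 0.5]],
            [[0.5, 0.6, 0.8], [0.4, 0.5, 0.7], [0.2, 0.3, 0.5]],
            [[0.5, 0.9, 0.6], [0.1, 0.5, 0.4], [0.4, 0.6, 0.5]],
        ])

        def consensus_degree(prefs):
            """Mean pairwise similarity between the experts' preference relations."""
            m = len(prefs)
            sims = [1 - np.abs(prefs[a] - prefs[b]).mean()
                    for a in range(m) for b in range(a + 1, m)]
            return float(np.mean(sims))

        cd = consensus_degree(experts)
        print(f"consensus degree = {cd:.3f}")
        if cd < 0.85:  # assumed consensus threshold
            print("below threshold: run another discussion round and update preferences")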

  • Special Session 03: A New Energy Approach to Building and Managing Intelligent Sustainable Hybrid Energy Systems (SHES)
    Franco F. Yanine, Technical University Santa María, Valparaíso, Chile (fyanine@uc.cl)
    Felisa M. Córdova, University of Santiago de Chile, Santiago, Chile (felisa.cordova@gmail.com)
    Ouadie Bennouna, ESIGELEC, IRSEEM, Rouen, France (bennouna@esigelec.fr)

    New research on SHES approaches the intelligent microgrid concept differently, from a systemic and cybernetic standpoint, much like a complex living organism. The system comprises the smart microgrid, which is to be designed and configured as a complex adaptive system (CAS), a living organism coupled with a "sustainable block" (a group of energy consumers, whether residential or industrial/commercial) and the utility grid. The key to building an affordable, highly efficient, versatile and flexible SHES lies not in the microgrid performance alone or in a particular technology, but in the interrelation and interaction (timely information being one of them) among the three systems involved: the smart microgrid, the sustainable block and the utility grid. Homeostasis means maintaining relatively stable internal conditions of a living organism despite continuous environmental changes. Homeostasis aims for a dynamic, adaptive, self-regulated steady state, maintained by the contribution of all the organ systems that comprise the living organism. Homeostatic control (HC) mechanisms involve continuous monitoring and regulation of all the factors that can change (variables), along with the communications necessary for that monitoring and regulation. Thus the homeostatic control system for microgrids is based on the actions of the nervous and endocrine systems, and emulates such systems and their operation. The HC system effectively regulates energy supply and demand, managing communications via set-points and the specific commands and functions of the algorithms behind the control mechanisms: the receiver (sensor) performs environmental and load monitoring and responds to stimuli (i.e. anything that causes changes in the controlled variables). The control center determines the set point, which must remain variable; it receives input from the receiver and determines appropriate responses. The effector receives output from the control center and provides the means to respond, via power supply to the loads, according to specific HC strategies. Here the effector responds either by reducing (negative feedback) or increasing power supply, or by delivering credit to a consumer who has restrained his/her energy consumption to allow others who consume more to have more energy available in the system. Energy balance is tightly regulated by energy demand and energy expenditure versus power supply, which is critical for the individual consumer and for the community (whether a residential community or an industrial/commercial park) as a whole. Because changes in environmental conditions, and sometimes in operations and load conditions, are somewhat unpredictable, the HC system responsible for the regulation of energy intake, storage and expenditure in the microgrid must be able to adapt quickly to such changes. Peripheral units such as the battery bank, fuel-cell systems and microturbines, along with the central monitoring system, are in constant cross-communication to ensure energy supplies for the different loads during periods of energy scarcity in the system (low levels of stored energy) or of irregular, intermittent generation by the renewable energy sources. It is through the stored energy, and through growing energy efficiency and thriftiness on the part of consumers, that the benefit for the whole occurs, especially during periods of scarcity due to low renewables production or dwindling power supply from the utility grid.
    The role of the "homeostatic regulator" emulates the hypothalamus function in the human body. The hypothalamus is an area of the brain with high levels of plasticity and adaptability, able to adapt quickly and very effectively to changes in the environment and in other organs. Finally, the role of energy storage is vital to our model: we postulate that, for the system to achieve higher levels of energy efficiency and thriftiness, the energy storage system must be augmented, oversized to the 50% installed capacity level, unlike the industry norm, which ranges between 25% and 30%. This will create the sensation of having "enough back-up", just as the body builds fat deposits to feel safe and secure, and just as people stack large amounts of non-perishable food and beverages in the cellar. The benefit drawn from this augmented storage strategy built into the microgrid, despite its larger investment cost (50% of installed power plant capacity), is similar to the behavior observed in humans when they are able to restrain their consumption and to be thriftier and more efficient in the use of resources, knowing that they have enough back-up to fall back on. It is the same feeling you get when you know you have enough in the refrigerator or in the cellar, and therefore can afford to go without food for a while or to forget about meals and be less apprehensive about energy intake. Several important functions and indices operate in the HC model for microgrids, such as the energy savings index and the exergy index, which shows the available capacity of the system to supply energy and the quality of such energy in the system. There is also Grid_frac, which tells the users and the microgrid remote operator how much energy is being drawn by each user from the grid; it is a measure of exergy and efficiency. This strategy allows the user to be thriftier and more efficient in spite of having more in storage. In summary, our model of energy homeostasis proposes that long-term higher levels of energy storage will trigger positive changes in behavior and thus make the whole HC system of the microgrid more effective. This is quite similar to what occurs in the human body with signals such as insulin and leptin levels, which influence the neuronal activity of central effector pathways that serve as controllers of energy balance. Because these hormones circulate at concentrations proportionate to fat mass (energy storage) and energy balance, a change in stored body fat (energy storage level) is sufficient to alter the delivery of these hormones to the brain, inducing the central effector pathway responses that promote the return of adiposity (normal levels of energy intake or consumption) to its original value. The bottom-line result of the HC approach for microgrids is that at least 30% savings on the consumer's energy bill (compared to the no-project alternative) will be realized. This result stems from a double benefit: a very limited use of the utility grid, of not more than 10% per month on average (the rest is provided by the microgrid), and the higher energy efficiency and thriftiness in energy supply and consumption associated with the HC and energy management strategies incorporated in the microgrid.
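
    The receiver / control center / effector loop described above is, at its core, a negative-feedback controller. The following minimal Python sketch illustrates the idea with a simple proportional correction; the set-point, gain and assumed plant response are hypothetical illustrations, not the authors' actual control laws.

        def control_step(setpoint, measured, supply, gain=0.5):
            """One homeostatic iteration: the effector nudges supply toward the set-point."""
            error = setpoint - measured      # receiver (sensor): deviation of the controlled variable
            return supply + gain * error     # negative feedback on the power supply

        # Hypothetical run: regulate delivered power toward a 10 kW set-point
        setpoint, supply = 10.0, 6.0
        for t in range(8):
            measured = 0.95 * supply         # assumed plant response (5% losses)
            supply = control_step(setpoint, measured, supply)
            print(f"t={t}: supply={supply:.2f} kW")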

  • Special Session 04: Data Acquisition and Management for Traceability Analytics
    Homepage: http://idamta.pdwsn.com/

    Jing He, Victoria University, Australia (Jing.He@vu.edu.au)
    Bo Mao, Nanjing University of Finance and Economics, China (bo.mao@njue.edu.cn)
    Hai Liu, School of Computer, South China Normal University, China (liuhai@scnu.edu.cn)

    In the era of wireless technology, robotics and web services, many new computing technologies are being introduced. With the recent development and progress of the IoT (Internet of Things), it is possible to obtain detailed information about how a system is operating and about its real-time status. For example, RFID can track the distribution of goods, different sensors can monitor the environment, and GPS can send back location and time. Based on this information, we can build a log of the monitored system and implement traceability analysis. Traceability is the ability to verify the history, location, or application of an item. It is especially critical for industries such as food processing, logistics, supply chain and e-business. The two key technologies for traceability analysis are data acquisition and data management. In the age of cloud computing, these are two promising fields. Although there are several solutions already in place, many challenges remain to be investigated and tackled. A minimal sketch of reconstructing an item's trace from RFID events follows this paragraph.
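    A minimal Python sketch of that idea, reconstructing each item's history from a stream of RFID read events (all tag IDs, locations and timestamps are invented for illustration):

        from collections import defaultdict

        # Hypothetical RFID read events: (tag_id, location, timestamp)
        events = [
            ("item-001", "factory",      "2015-03-01T08:00"),
            ("item-002", "factory",      "2015-03-01T09:10"),
            ("item-001", "warehouse",    "2015-03-02T14:30"),
            ("item-001", "retail-store", "2015-03-04T10:05"),
        ]

        # Traceability log: rebuild each item's history from the event stream
        trace = defaultdict(list)
        for tag, location, ts in sorted(events, key=lambda e: e[2]):
            trace[tag].append((ts, location))

        # The core traceability query: verify the history of an item
        for ts, loc in trace["item-001"]:
            print(ts, loc)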
    The purpose of this special session is not only to discuss the existing topics in data acquisition and management for traceability analysis, but also to focus on the rapidly growing area arising from the integration of big data analytics and traceability analysis, for significant mutual promotion. We intend to discuss the recent and significant developments in the general area and to promote cross-fertilization of techniques. Participants in this special session will benefit by learning the latest research results on data acquisition and management for IoT- and big-data-analytics-based traceability systems, as well as novel ideas on merging the two.
    The special session is interdisciplinary and provides a platform for researchers, industry practitioners and students from engineering, sociology, computer science and information systems to share, exchange, learn, and develop new research results, concepts, ideas, principles, and methodologies, aiming to bridge the gaps between paradigms, encourage interdisciplinary collaborations, and advance and deepen our understanding of IoT, big data analytics, traceability and the related data management methods.
    There are two major topics of interest for this workshop: (1) traceability data acquisition, and (2) data management and mining for the generated IoT data. Comprehensive tutorials and surveys are also expected. The general topics include, but are not limited to:

    Traceability Data Management
    * Visualization of IoT based Traceability system
    * Intelligent Data Fusion and Aggregation
    * Storage Management Technologies
    * Deep Learning
    * Big (Sensor) Data
    * Pattern Discovery
    * Multiple Representation Structure
    * Spatiotemporal Data Management and Analysis
    IoT based Traceability Data Acquisition
    * RFID Related Technologies
    * Wireless Sensor Network
    * Online Quality Estimation
    * Data Acquisition based on Smart Phones
    * User Analysis based on Social Network

    More specifically, recommended topics include, but are not limited to, the following:

    * Advanced Cloud Computing Solutions for Traceability Systems
    * Advanced IT Solutions for Traceability Systems
    * Agent-based approaches to Cloud Services for Traceability Systems
    * Agent-based approaches to ICT Services for Traceability Systems
    * Self-Organizing Agents for Service Composition and Orchestration in Traceability Systems
    * Self-service cloud and self-optimization in Traceability Systems
    * Cloud resource allocation approaches
    * Information resource allocation approaches
    * Privacy Preserving in Cloud Computing for Traceability Systems
    * Privacy Preserving for Traceability Systems
    * Trust in Cloud Computing for Traceability Systems
    * Traceability Systems related Workflow Design and Optimization
    * Emerging Areas of Traceability Applications in the frontier of web and cloud computing

  • Special Session 05: AHP/ANP applications I
    Roberto Camanho, SIDEC (rcamanho@sidec.com.br)

    Making good decisions is one of the most important things we can do; our organizations' survival and success and our own well-being depend on our decision-making ability. The Analytic Hierarchy Process (AHP) is a structured method for helping people deal with complex decisions. It provides a comprehensive and rational framework for structuring a problem, for representing and quantifying its elements, for relating those elements to overall goals, and for evaluating alternative courses of action. Rather than prescribing a "correct" decision, the AHP helps people determine one, based on mathematics and human psychology. It is used throughout the world in a wide variety of decision-making situations, in fields such as government, business, industry, healthcare, and education. Its most general framework, called the Analytic Network Process (ANP), applies to decision making with dependence and feedback, with benefits, opportunities, costs and risks synthesized using strategic criteria, and with sensitivity analysis applied to determine the stability of the outcome. The AHP/ANP use core mathematics to derive priorities by making comparisons, using the idea of dominance, for criteria, subcriteria and alternatives alike; a small numerical sketch of this priority derivation follows the topic list below. Topics of interest include, but are not limited to:

    * AHP Theory & Methodology
    * AHP/ANP Mixed Methods, Optimization and Application
    * Banking and Financial Applications
    * Civil and Urban Applications
    * Conflict Resolution
    * Corporate Social Responsibility
    * Disaster Management
    * Environmental Application
    * Fuzzy AHP Approach
    * Government & Politics
    * Human Resource Management
    * Industrial Engineering
    * Information System
    * Manufacturing
    * Marketing Applications
    * Medical and Health Applications
    * Neural Networks
    * Performance and Simulation
    * Quality and Safety
    * Risk Analysis
    * Strategic Applications
    * Supply Chain Management
    * Sustainability
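
    As a concrete illustration of deriving priorities from pairwise comparisons, here is a minimal Python sketch of the classical eigenvector calculation with a consistency check; the comparison matrix is an invented example on Saaty's 1-9 scale, not data from any session paper.

        import numpy as np

        # Hypothetical pairwise comparisons of 3 criteria (e.g. cost, quality, delivery)
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = eigvals.real.argmax()
        w = eigvecs[:, k].real
        w = w / w.sum()                          # priority vector (principal eigenvector)

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
        ri = 0.58                                # Saaty's random index for n = 3
        print("priorities:", w.round(3))
        print("consistency ratio:", round(ci / ri, 3))  # < 0.10 is conventionally acceptable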

  • Special Session 06: AHP/ANP applications II
    Valerio Salomon, UNESP (salomon@feg.unesp.br)

    Making good decisions is one of the most important things we can do; our organizations' survival and success and our own well-being depend on our decision-making ability. The Analytic Hierarchy Process (AHP) is a structured method for helping people deal with complex decisions. It provides a comprehensive and rational framework for structuring a problem, for representing and quantifying its elements, for relating those elements to overall goals, and for evaluating alternative courses of action. Rather than prescribing a "correct" decision, the AHP helps people determine one, based on mathematics and human psychology. It is used throughout the world in a wide variety of decision-making situations, in fields such as government, business, industry, healthcare, and education. Its most general framework, called the Analytic Network Process (ANP), applies to decision making with dependence and feedback, with benefits, opportunities, costs and risks synthesized using strategic criteria, and with sensitivity analysis applied to determine the stability of the outcome. The AHP/ANP use core mathematics to derive priorities by making comparisons, using the idea of dominance, for criteria, subcriteria and alternatives alike. Topics of interest include, but are not limited to:

    * AHP Theory & Methodology
    * AHP/ANP Mixed Methods, Optimization and Application
    * Banking and Financial Applications
    * Civil and Urban Applications
    * Conflict Resolution
    * Corporate Social Responsibility
    * Disaster Management
    * Environmental Application
    * Fuzzy AHP Approach
    * Government & Politics
    * Human Resource Management
    * Industrial Engineering
    * Information System
    * Manufacturing
    * Marketing Applications
    * Medical and Health Applications
    * Neural Networks
    * Performance and Simulation
    * Quality and Safety
    * Risk Analysis
    * Strategic Applications
    * Supply Chain Management
    * Sustainability

  • Special Session 07: Neurocognitive Engineering and Neuromarketing
    Felisa Cordova, University of Santiago de Chile (felisa.cordova@gmail.com)
    Juan Pablo Rodríguez, CEO eye on media, Chile (jprodriguez@eyeonmedia.net)
    Hernán Díaz, University of Santiago de Chile
    Robertino Pereira, CEO eye on media, Colombia
    Ana Titos, University of Granada, Spain

    Human decision-making systems can depend on many factors, some of them deeply rooted in ancestral phylogeny, and others the result of our personal life history, depending on our trained or dynamically changing preferences.
    Trying to engineer neurocognitive processes presupposes knowledge of the elements or components that we want to (re)engineer. Until now, many of these components have been revealed thanks to new technologies involving brain stimulation and scanning, functional brain imaging, image analysis and the study of brain lesions.
    One of the spin-off consequences of the development of neuroeconomics, the neurobiology of decision making, was neuromarketing: the use of electrophysiological devices to capture human physiological activity during buying decisions, in order to learn about preferences, probabilities of choice and the neural processes involved. It has been established that, a few seconds before a risky decision, specific nuclei of the brain start evaluating the actual conditions until a threshold is surpassed, from which point it is possible to predict the ensuing output.
    We present here a research joint venture that associates neurocognitive research on human behavior with neuromarketing empirical findings on decision making. The first objective of this enterprise is to develop novel and diverse ways to analyze, visualize and interpret human physiological data, with the purpose of characterizing functional processes of the brain at different timescales while different tasks are performed.
    While the framework of neuromarketing has hitherto been the stimulus-response paradigm, the neurocognitive engineering approach searches for answers on the mid- and long-term behavioral-change timescale. That is, it is deeply interested in processes like teaching and learning as central processes and procedures of human communication, education and culture.

  • Special Session 08: Advances in Computational Intelligence
    Maria Augusta Soares Machado (mmachado@ibmecrj.br, fuzzyconsultoria@hotmail.com)

    The Computational Intelligence (CI) field encompasses the study of neural networks, fuzzy logic, evolutionary and nature-inspired computing, and machine learning. Its most striking benefits are usually related to problems for which no satisfactory solution could be found by directly using "traditional" paradigms, although CI's methods are (and have to be) also based upon rigorous and firmly established mathematical results. Plenty of examples of very successful applications can be found, for instance, in the fields of stochastic global optimization and pattern recognition, to cite a few. In this fashion, its scope includes problems related to logic, reasoning, planning, natural language understanding, rule based machine learning, business, finance, commerce, marketing, economics, decision making, data mining, fuzzy inference systems, neural networks, neural pattern recognition, clustering, genetic algorithms, probabilistic and possibilistic reasoning and all related machine learning methods. Topics of interest include, but are not limited to:

    * Soft computing methods and applications related to big data
    * Pattern recognition, Optimization and Application
    * Signal processing
    * Problems related to the higher cognitive functions
    * Applications of Choquet integral
    * iPhone applications aimed at decision making
    * Simulated Annealing
    * Applications of Fuzzy Logic

  • Special Session 09: IT applications to City Logistics, Urban Logistics and Reverse Logistics
    Fernando Augusto Silva Marins, São Paulo State University, Brazil (fmarins@feg.unesp.br)
    Aneirson Francisco da Silva, São Paulo State University, Brazil
    José Roberto Dale Luche, São Paulo State University, Brazil
    Reinaldo Fagundes dos Santos, FATEC - São José dos Campos, Brazil

    City Logistics, Urban Logistics and Reverse Logistics have become relevant topics in the current research agenda. The goal is to simultaneously take into account the triple bottom line concept within the decision-making process, in order to face the enormous and relevant challenges involving these themes. The combined use of optimization methods and IT resources has increasingly attracted the interest of researchers for solving the related problems.
    This special session is interested in attracting high-quality research papers addressing issues in City Logistics, Urban Logistics and Reverse Logistics, including sustainability aspects. The focus is on new developments, concepts, practices and research opportunities in decision-making models for City Logistics, Urban Logistics and Reverse Logistics systems. Comprehensive and integrative reviews of these themes are also welcome. Topics of interest include, but are not limited to:

    * IT and Multi-criteria decision making for City Logistics, Urban Logistics and Reverse Logistics
    * IT and Optimization techniques applied to City Logistics, Urban Logistics and Reverse Logistics problems
    * IT and Simulation-optimization applied to City Logistics, Urban Logistics and Reverse Logistics problems
    * Minimization of congestion, pollution and waste, and alternative fuels, in City Logistics, Urban Logistics and Reverse Logistics problems
    * IT and Management Systems applied to City Logistics, Urban Logistics and Reverse Logistics problems
    * IT and Big Data Management applied to City Logistics, Urban Logistics and Reverse Logistics problems
    * Sustainability in reverse logistics and supply chain management
    * Green Operations Management related to City Logistics, Urban Logistics and Reverse Logistics
    * Sustainable transportation, Green and sustainable vehicle routing

  • Special Session 10: Applications of Multi-Criteria Decision Analysis in Quantitative Management
    Luis Alberto Duncan Rangel, Fluminense Federal University (duncan@metal.eeimvr.uff.br)
    Luiz Flavio Autran Monteiro Gomes (autran@ibmecrj.br)

    Multi-Criteria Decision Analysis (MCDA) is the domain of Management Science that covers the use of analytical principles and procedures for structuring, analyzing and solving decision problems under multiple criteria. There are two major streams of methods inside MCDA: quantitative approaches, such as, to give a few examples, the AHP/ANP and ELECTRE methods, the Dominance-based Rough Set Approach, and Goal Programming methods; and Verbal Decision Analysis approaches, such as the DEX, ARAMIS, or MASKA methods. Since both streams are essentially founded on quantitative concepts and constructs, both belong to the realm of Quantitative Management. In particular, many of those methods are also considered to fit into what we have named Computational Intelligence or Soft Computing. In this Special Session, cases of application of MCDA methods are presented; their limitations are discussed and improvements in their applicability are suggested. Topics of interest include, but are not limited to:

    * Methods that belong to the ELECTRE and PROMETHEE families
    * Methods founded on building a Multi-Attribute Utility function
    * Methods of Verbal Decision Analysis
    * MultiCriteria methods dealing with hybrid data as inputs
    * Distance-based consensus methods
    * Multiobjective Combinatorial Programming
    * MACBETH method
    * Dominance-based Rough Set Theory
    * Multi-Criteria methods that take into consideration interactions between criteria
    * Multi-Criteria Classification and Sorting methods

  • Special Session 11: Long-Term Electricity Forecasting - Management and Support in Decision Making
    Reinaldo Castro Souza, Pontifical Catholic University of Rio de Janeiro, Brazil (reinaldo@ele.puc-rio.br)
    Fernando Cyrino, Pontifical Catholic University of Rio de Janeiro, Brazil (cyrino@puc-rio.br)

    Electricity is fundamental to the realization of productive activities in a country or region. In order to achieve levels of development compatible with a country's dimensions and needs, a secure and sufficient electricity supply is essential.
    In this context, long-term annual forecasts of the supply and consumption of electricity in a country or region, and of electricity prices, are of great importance for the decision making of companies and entities of the electrical sector. Those forecasts play an influential part in the planning of companies and of the electrical sector. Energy is influenced by several exogenous variables, e.g. related to climate, economy, society, and politics, and forecasting such variables over a long-term horizon implies high uncertainty. It is therefore of great interest to provide ranges of results, based on case studies, that enable better management of the decisions to be made under the different conditions that might arise in the future. A minimal scenario-based regression sketch follows this paragraph.
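    A minimal Python sketch of this scenario-based approach, regressing annual demand on exogenous variables and evaluating the fitted model under alternative long-term scenarios (all figures are invented for illustration):

        import numpy as np

        # Hypothetical annual history: GDP index, mean temperature (deg C), demand (TWh)
        gdp  = np.array([100, 104, 109, 113, 118, 124], dtype=float)
        temp = np.array([24.1, 24.3, 24.0, 24.6, 24.8, 24.5])
        dem  = np.array([430, 447, 465, 478, 497, 519], dtype=float)

        X = np.column_stack([np.ones_like(gdp), gdp, temp])
        beta, *_ = np.linalg.lstsq(X, dem, rcond=None)   # ordinary least squares fit

        # Long-term scenarios for the exogenous variables
        scenarios = {"high growth": (160, 25.2), "reference": (145, 24.9), "low growth": (130, 24.6)}
        for name, (g, t) in scenarios.items():
            forecast = beta @ np.array([1.0, g, t])
            print(f"{name}: {forecast:.0f} TWh")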
    The objective of this special session is to present current research and future tendencies in the long-term forecasting of electricity demand as a tool to support management and decision making in the electrical sector. It is also an objective of this special session to provide discussions of experiences, of obtained results, and of problems faced by institutions that deal with this subject. The session will focus on theoretical issues and on the application of methods, whether in research, in development, or already implemented. Articles describing systems, tools and techniques are of interest, as are articles that point to future directions. The topics of this special session include, but are not limited to:

    * Usage of scenarios in long-term electricity forecasts
    * Modelling techniques
    * Employment of climatic and economic variables in the modelling of long-term electricity forecasts
    * Top-down and bottom-up approaches
    * Electricity demand forecasts divided by consumption class: residential, industrial, tertiary, rural and others
    * Influence of energy efficiency measures on electricity demand
    * Modelling energy efficiency in long-term forecasts
    * Impact of electricity tariffs on demand
    * New technologies and their impact on demand
    * Environment vs. energy resources: influence on electricity demand
    * Behavior of hourly electricity demand curves in relation to new technologies and tariffs

  • Special Session 12: The Dominance-based Rough Set Approach in Quantitative Management
    Ayrton Benedito Gaia do Couto, The Brazilian Development Bank (BNDES), Brazil (ayrtoncouto@gmail.com)

    Given the wide range of technologies and methods for data recovery, processing and exploration from various sources (databases, spreadsheets, documents, the web, etc.) and forms (structured and unstructured data, images, etc.), the use of tools and methods for information management and decision-making aiding has become necessary and indispensable within corporate and academic environments.
    In this context, through ITQM, participants can discuss state-of-the-art and state-of-the-practice methods for pattern recognition and relationship extraction applied to multicriteria decision-making aiding and, in particular, the Dominance-based Rough Set Approach.
    Topics of interest include, but are not limited to:

    * Artificial Intelligence relating to knowledge and data management
    * Big data and applications
    * Data engineering
    * Data mining
    * Decision support systems
    * Decision making with missing, uncertain or inconsistent data
    * Expert systems
    * Fuzzy logic
    * Knowledge discovery in databases
    * Machine learning
    * Multicriteria decision aiding
    * Neural networks in decision making tools
    * Preference modeling
    * Reasoning and pattern recognition
    * Social networks and social media

  • Special Session 13: Knowledge Discovery Meets Multicriteria Analysis
    Harold Paredes-Frigolett, Universidad Diego Portales, Santiago, Chile (harold.paredes@udp.cl)

    Knowledge discovery, broadly defined as the problem of building large knowledge bases (knowledge acquisition) and extracting knowledge from them for reasoning and theory building (knowledge extraction), has traditionally been dealt with in the field of knowledge-based systems by choosing logical representation languages based on subsets of first-order predicate logic and by applying reasoning algorithms based on logics for knowledge representation and knowledge extraction. The mainstream approach to building large knowledge representation systems has traditionally consisted in reducing the expressiveness of the underlying knowledge representation logics so as to be able to implement efficient algorithms for rule extraction and reasoning. This has remained the mainstream approach to knowledge discovery since the publication of the influential article by Levesque and Brachman in 1987 due to the so-called expressiveness trade-off of knowledge representation. Unfortunately, even for highly restricted subsets of first-order predicate logic, the reasoning algorithms used for knowledge extraction from large knowledge bases become quite intractable as the number of rules in the knowledge base grows. This trend towards restricting expressiveness has also been one of the main impediments to building commercial-grade knowledge-based systems in a variety of application domains.
    In this session, we seek contributions to knowledge discovery that depart from the traditional logical approaches in the Artificial Intelligence tradition. Contributions describing how multicriteria analysis can be used (i) to acquire knowledge rules from semi- and non-structured sources such as the Web and (ii) to extract relevant rules from large knowledge bases for reasoning in commercial-grade knowledge-based systems are sought for this special session. Contributions should depart from conventional approaches that rely on heavy-duty inferential processes in a knowledge base and should explore the use of multicriteria decision analysis algorithms for acquiring and extracting relevant rules from large knowledge bases. Articles reporting applications of multicriteria analysis in this area, and theoretical articles showing how multicriteria analysis can be used to deal with the expressiveness trade-off of knowledge representation, are especially encouraged.

  • Special Session 14: Applications and Software in Verbal Decision Analysis
    Placido Rogerio Pinheiro, University of Fortaleza, Brazil (placido@unifor.br)
    Maria Elizabeth Sucupira Furtado, University of Fortaleza, Brazil (elizabet@unifor.br)

    Verbal Decision Analysis (VDA) is a methodological approach of Multiple Criteria Decision Analysis (MCDA) that supports problem solving in a verbal way. The solution of real-world problems must be analysed from the perspective of user preferences. A user, who can be any individual (a client, a stakeholder, a manager, etc.) involved in a decision-making process, has several needs, requirements to meet, rules to obey, and so on. A process of elicitation of user preferences can therefore be complex, especially when a large number of criteria and criterion values are involved. This process is costly and requires considerable time from the decision maker, who must pose the appropriate questions to the several users regarding these criteria and analyze the answers. These methodologies must be applicable to the analysis of real-world problems, since the structuring of user preferences and the classification of the best ones can then be clearer and more understandable from the decision maker's point of view. This necessity leads researchers to create more robust procedures applicable to large-scale problems, reducing the notable impact on the methods' complexity caused by the consideration of a great number of criteria in a given problem. Recent research in the area has shown that the hybridization of methods is able to overcome the limitations presented by the methods when they are applied separately.
    The goal of this session is to present research on hybrid methodologies structured on VDA, which allow the resolution of large-scale problems in Computer Science and Engineering. Typical, but not exclusive, topics of interest are:

    * Selection of Project Management
    * Software: Practice and Experience
    * Application in Industrial Engineering
    * Complex Project Planning
    * Portfolio Management Process
    * Investment in Decision Making
    * Engineering and Business Management
    * Health Care Coverage Decisions
    * Evaluation of User Interfaces
    * Analysis of alternative solutions for interaction design

  • Special Session 15: Big Data analytics for Smarter Commerce
    Svetlana Maltseva, National Research University Higher School of Economics, Moscow, Russia (smaltseva@hse.ru)
    Andrey Dmitriev, National Research University Higher School of Economics, Moscow, Russia (admitriev@hse.ru)
    Mikhail Komarov, National Research University Higher School of Economics, Moscow, Russia (mkomarov@hse.ru)

    This session deals with Smarter Commerce as a concept providing for the implementation of the customer centricity paradigm in the changing digital ecosystem of business. The customer centric approach is based on customer transformation. A new breed of customer is dictating a new set of terms in the dynamic between buyers and sellers. Customers approach a sale empowered by technology and transparency, with more extensive information from more sources than ever before. They expect to engage with companies when and how they want: in person, online and on the go. And they want these methods to tie together seamlessly.
    Despite the benefits of moving to the customer centricity paradigm, the practical implementation of a customer centric strategy requires major changes in the enterprise's structure, resources, and activities.
    Critical for the implementation of this concept is the use of various sources of data about the user. These data arise from consumers interacting directly with the business and are stored in CRM systems, and they also come from external sources, such as social networks, sensors, and intelligent equipment. Smarter commerce turns customer insight into action, enabling new business processes that help companies buy, market, sell and service their products and services.
    Big Data technology makes it possible not only to carry out this analysis in real time, but also to create new, more accurate methods and tools for consumer preference measurement, product design and positioning, brand equity assessment, and pricing research. These technologies also make it possible to create new systems for monitoring and measuring enterprise performance.
    The most common approaches to the analysis of information applicable to Big Data are: association rule learning, classification tree analysis, genetic algorithms, machine learning, regression analysis, sentiment analysis, social network analysis, time series analysis, and spatial analysis. A minimal sketch of association rule mining appears below.
    The session is focused on the tasks of Smarter Commerce, which define the restrictions and requirements for using Big Data methods and tools, as well as highlight new challenges for them.
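    A minimal Python sketch of the first technique in that list, association rule learning, computing the support and confidence of a rule over hypothetical purchase baskets (all items invented for illustration):

        # Hypothetical purchase baskets from an e-commerce log
        baskets = [
            {"laptop", "mouse"},
            {"laptop", "mouse", "bag"},
            {"phone", "case"},
            {"laptop", "bag"},
            {"phone", "case", "charger"},
        ]

        def support(itemset):
            """Fraction of baskets containing every item in the itemset."""
            return sum(itemset <= b for b in baskets) / len(baskets)

        def confidence(antecedent, consequent):
            """Confidence of the rule antecedent -> consequent."""
            return support(antecedent | consequent) / support(antecedent)

        print(support({"laptop", "mouse"}))       # 0.4
        print(confidence({"laptop"}, {"mouse"}))  # ~0.67: two thirds of laptop buyers also buy a mouse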
    Topics of interest include, but are not limited to:

    * Big Data analytics for business applications
    * Data and knowledge integration, management and mining
    * Real-time Data analysis in Smarter Commerce
    * Customer centric process design
    * Customer experience management
    * Customer value analysis
    * Value chain transformation
    * Big Data driven marketing analytics
    * Marketing metrics and Smarter Commerce metrics
    * Recommender systems
    * Performance metrics in Smarter Commerce
    * Smart measurements and monitoring
    * Data-driven Risk and Revenue Management
    * Agent oriented analysis in social networks

  • Special Session 16: Quantitative Management to improve the control and planning of projects, programs and portfolios: demystifying statistical tools
    Fabio Reginaldo, IBMEC, Quode Project, International Institute of Learning (fabioreginaldo@yahoo.com)

    Project management sometimes involves negative results: scope creep, late schedules, cost overruns, poor communication, and many other problems. In fact, some of the manual tools used for project control are insufficient to reach good management quality and high maturity. Quantitative management is not new in the academic environment, but it still meets some difficulty in being used in the business environment.
    Quantitative management of a project is able to provide, through the analysis of data obtained in measurements, an objective view of the project and the processes used in it. It thus enables a better understanding of the status and progress of the project, its performance variations and quality, and the degree of achievement of the objectives of the project and the organization. This is possible because quantitative management provides the means to establish and maintain stable levels of process variation, allowing the prediction of future outcomes [FLORAC and CARLETON, 1999]. A minimal control-chart sketch of this idea follows.
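    A minimal control-chart sketch in Python of that stable-variation idea: estimate 3-sigma limits from measurements and flag points that fall outside them (the data are invented for illustration, and real SPC practice distinguishes several chart types and run rules).

        import numpy as np

        # Hypothetical weekly measurements of a project process (e.g. defect density)
        x = np.array([4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 3.7, 4.0, 4.1])

        mean, sigma = x.mean(), x.std(ddof=1)
        ucl, lcl = mean + 3 * sigma, mean - 3 * sigma    # 3-sigma control limits

        print(f"centre={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
        for week, value in enumerate(x, start=1):
            if not lcl <= value <= ucl:
                print(f"week {week}: {value} out of control -- investigate special cause")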
    Despite Quantitative Management and Multi-Criteria Decision Analysis already being widely used by some large companies as important tools to improve processes and decision making, there is still a great opportunity to encourage the use of these tools, which requires the sharing of good experiences and knowledge. As an example of this opportunity: in the Information Technology sector there are widely known maturity models, such as CMMI-SEI and the MPS.BR governed by SOFTEX Brazil; these models require companies to apply quantitative management to achieve certain maturity levels, usually the highest ones. Of the thousands of existing companies in Brazil alone, only a small fraction have reached such a level with the application of quantitative management.
    This special session aims to consolidate and discuss articles that show results of the application of quantitative management for projects, programs, portfolios, or process improvements.
    Topics of interest include, but are not limited to:

    * Multi-criteria decision aiding applied to projects, programs and portfolios
    * Statistical tools to improve processes
    * Statistical tools and Simulation for Schedule Analysis
    * Statistical tools and Simulation for Risk Analysis
    * Statistical tools and Simulation to solve problems in teams
    * Software Estimation
    * Quality Function Deployment
    * Maturity Models: CMMI, MPS.BR, OPM3, P3M, and others
    * Multi-criteria decision aiding applied to project selection in portfolio analysis
    * Multi-criteria decision aiding applied to project balancing in portfolio analysis

  • Special Session 17: DEA and MCDA in Sports Management and Evaluation
    João Carlos Soares de Mello (jcsmello@producao.uff.br)
    Lidia Angulo Meza (lidia_a_meza@pq.cnpq.br)

    Recently, there has been increasing interest in applications of operational research in sports. One of the main topics in this field is sport management, including sport evaluation and alternative rankings. Two tools have been widely used in this field: Data Envelopment Analysis and Multicriteria Decision Analysis. Papers dealing with these two tools in sports are welcome in this special session; a minimal Data Envelopment Analysis sketch follows this paragraph. The term sport is interpreted liberally here and includes games and pastimes, gambling, lotteries, and general fitness and health-related activities.
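    The sketch below illustrates the classical input-oriented CCR multiplier model of Data Envelopment Analysis as a linear program in Python; the teams, inputs and outputs are invented for illustration, and this is only one of many DEA formulations.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical sports data: 5 teams, inputs = [payroll, squad size], outputs = [points, goals]
        X = np.array([[50, 28], [80, 30], [45, 25], [60, 27], [70, 29]], dtype=float)
        Y = np.array([[60, 55], [75, 70], [50, 40], [68, 60], [62, 58]], dtype=float)

        def ccr_efficiency(o):
            """Input-oriented CCR efficiency of DMU o: max u.y_o s.t. v.x_o = 1, u.y_j <= v.x_j."""
            n, m = X.shape
            s = Y.shape[1]
            c = np.concatenate([-Y[o], np.zeros(m)])        # maximize u.y_o (variables: u then v)
            A_ub, b_ub = np.hstack([Y, -X]), np.zeros(n)    # u.y_j - v.x_j <= 0 for every DMU j
            A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)  # v.x_o = 1
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
            return -res.fun

        for o in range(len(X)):
            print(f"team {o}: efficiency = {ccr_efficiency(o):.3f}")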
    Topics of interest include, but are not limited to:

    * Competitive strategy
    * Match outcome models
    * Decision support systems
    * Analysis of sporting technologies
    * Analysis of rules and adjudication
    * Performance measures and models
    * Optimisation of sports performance
    * Financial valuation in sport
    * Large sport events organization and management

  • Special Session 18: Information and Technology Governance
    Carlos Francisco Simões Gomes, Fluminense Federal University (cfsg1@bol.com.br)

    Governance refers to "all processes of governing, whether undertaken by a government, market or network, over a formal or informal organization or territory, and whether through laws, norms, power or language". It relates to the processes of interaction and decision making among the actors involved in a collective problem that lead to the creation, reinforcement, or reproduction of social norms and institutions. Information and technology (IT) governance is a subset discipline of corporate governance, focused on information and technology and on its performance and risk management.
    Topics of interest include, but are not limited to:

    * COBIT - Control Objectives for Information and related Technology
    * ITIL™ - Information Technology Infrastructure Library
    * CMMI - Capability Maturity Model - Integration
    * Supply chain Management and IT

  • Special Session 19: Process Improvement Quantitative Tools
    Annibal Parracho Sant'Anna (annibal.parracho@gmail.com)
    Biographical note: Annibal Parracho Sant'Anna holds a Ph.D. in Statistics from the University of California, Berkeley (1977) and an M.Sc. in Mathematics from IMPA/CNPq (1970). He graduated in Mathematics and in Economics at Universidade Federal do Rio de Janeiro. He has been Head of the Institute of Mathematics of Universidade Federal do Rio de Janeiro and President of the Brazilian Operational Research Society.

    Different aspects of process improvement rely on quantitative tools. Assessment of risks, measurement of the reliability of systems, and prioritization of failure corrections are some of these aspects. Composition of Probabilistic Preferences is a methodology that, by taking into account the subjectivity involved in the combination of multiple criteria, may be especially useful in each of these aspects of the management of process improvement; a minimal sketch follows. Other ranking and classification tools employed in similar tasks are the subject of recent research and the source of new results that may also be brought into the discussion in this session.
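    A minimal Monte Carlo sketch in Python of the Composition of Probabilistic Preferences idea: treat each score as the location of a random disturbance, estimate the probability of each alternative being the best under each criterion, and compose those probabilities. The scores, the normal disturbance and the two composition rules shown are illustrative assumptions, not the methodology's only options.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical scores of 4 alternatives on 3 criteria (higher is better)
        scores = np.array([[7.0, 5.0, 8.0],
                           [6.5, 6.0, 7.5],
                           [8.0, 4.5, 6.0],
                           [7.2, 5.5, 7.0]])
        N = 100_000
        samples = rng.normal(scores, 0.5, size=(N, 4, 3))   # assumed disturbance, sd = 0.5
        best = samples.argmax(axis=1)                       # best alternative per draw, per criterion

        # P(alternative i is the best under criterion j)
        p_best = np.stack([(best == i).mean(axis=0) for i in range(scores.shape[0])])

        print(p_best.round(3))
        print("best by all criteria:", p_best.prod(axis=1).round(4))                  # pessimistic composition
        print("best by at least one:", (1 - (1 - p_best).prod(axis=1)).round(4))      # optimistic composition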
    Topics of interest include, but are not limited to:

    Composition of Probabilistic Preferences, Risk Assessment, Reliability Evaluation, Failure Modes and Effects Analysis, Classification and Benchmarking and every development in multi-criteria analysis related to IT-enabled quantitative management.

  • Special Session 20: Intelligent Transportation Systems and the 2016 Olympics
    Paulo Cezar M Ribeiro, Federal University of Rio de Janeiro, Brazil (pribeiro@pet.coppe.ufj.br)

    The use of Intelligent Transportation Systems (ITS) in urban areas has become an essential tool for both traffic and transit management. The use of gathered traffic and transit information, together with data processing, plays a very important role in improving the performance of public transportation systems, maximizing their available capacity and efficiency, and mitigating traffic congestion. In congested cities like Rio de Janeiro, host city of the 2016 Olympics, it will be necessary to improve the efficiency of the public transportation systems and to avoid or mitigate traffic congestion.
    This session will be based on the use of ITS in the Olympics infrastructure. The city of Rio de Janeiro is already making intensive use of ITS. Other cities are encouraged to present their ITS-based solutions for transportation systems. More specifically, this session will present the ITS systems and other solutions, already in operation or under development, that will be used in the 2016 Olympics.
    Main areas and research:

    * Area Traffic Control
    * Traffic Surveillance and Operational Control Center
    * Travel time prediction and reliability
    * Automatic Payments
    * ITS and BRT systems

  • Special Session 21: Crisis, Risk and Business Continuity Management
    Denise Faertes, Petrobras/E&P/ENGP/OPM (faertes@petrobras.com.br)

    This session will present an overview of initiatives in Brazilian industry related to Crisis, Risk and Business Continuity programs, considering the relevance of the implementation of reliability and risk techniques and models. In order to face the threats presently imposed on organizations, and to be in compliance with new concepts of holistic management and resilience, reliability and risk assessment methodologies, together with the implementation of integrated operation concepts (involving the use of collaborative technologies and multidiscipline work processes), compose powerful tools to support the establishment of those programs. Through the application of the related processes, technologies and models, it is possible to identify potential threats to an organization and their associated impacts on business operations, and to provide a framework for building the organizational capability for an effective response that safeguards the interests of its key stakeholders, reputation, brand and value-creating activities. These issues involve managing the recovery of business activities in the event of a business disruption, and managing the overall program through training, exercises and reviews, to ensure that business continuity plans stay current and up to date. A discussion will be promoted in the hope of contributing to a wider comprehension of the importance of these topics when thinking about production assurance and prompt response.

  • Special Session 22: Using Information Technologies to Create Value for Customers
    Priscilla Yung Medeiros, IBMEC-RJ, Brazil (priscilla.medeiros@ibmecrj.br)

    New technologies create value for both customers and firms by changing the way they collaborate and communicate with one another. The opportunities and challenges offered by new technologies create new areas of marketing research. For instance, more and more marketers deal with the analysis of big data on an everyday basis. It is therefore essential to create effective ways of using big data to inform marketing decisions. It has also become critical for businesses to use information technology efficiently to deliver better services to their own personnel and external clients. IT also enhances the way firms communicate with their customers and create value.
    Main areas and research:

    * Analyzing Big Data and Using it Effectively to Inform Marketing Decisions
    * Using IT to Offer Better Services to Customers
    * Using IT to Enhance Communication with Customers
    * Creating Value via Technology and Social Media

    Coordinator of the session: Prof. Priscilla Yung Medeiros, Ph.D. Kellogg School of Management, professor at IBMEC-RJ, researcher in the Marketing area.

Workshops:

  • Workshop 01: Smart Cities: The Roles of IT and Quantitative Management
    Raul Colcher, Questera Consulting, Brazil, (raul.colcher@questera.com)
    Luiz Flavio Autran Monteiro Gomes, IBMEC, Brazil (autran@ibmecrj.br)

    The increasing emphasis placed on technologies applicable to urban processes and systems, often grouped under the label "smart cities", represents an invitation and a real challenge for the development of IT and quantitative management tools. Be it in the context of energy efficiency, housing and urbanism, mobility, urban infrastructure, environment and quality of life, or citizens' access to urban information and governance, modern cities urgently need tools for better operation and management. These are complex systems, for which the traditional qualitative approaches of policy formulation and administration seem to have reached a point of exhaustion. Some of the answers to this challenge may come from emerging IT developments (e.g. Big Data, Analytics, Cloud Computing, and the Internet of Things (IoT)), while others will derive from specific research efforts in the field. Integration, standardization and technology management/governance issues will probably also need to be addressed. In order to promote awareness and collaboration for the development of research in this important field, we will have a special workshop dedicated to "Smart Cities: The Roles of IT and Quantitative Management" under ITQM 2015 (http://www.itqm-meeting.org/2015/). The main purpose of this workshop is to give researchers and practitioners the opportunity to share the most recent advances in the area, to generate new results in this relatively under-researched field, and to determine directions for further research. The workshop is focused on topics related to all aspects of IT and quantitative management applicable to urban processes and systems.
    Topics of interest include, but are not limited to, the following:

    * Modelling and simulation for urban processes.
    * IT and business architectures in the city management context.
    * Big data and Analytics applied to urban systems.
    * Cloud computing for urban systems.
    * Advances in machine learning, computer vision and other AI technologies for application to urban processes.
    * Social networks and their appropriation by urban processes and systems.
    * Parallel processing and Grid for urban systems.
    * Advances in Geographical Information Systems for the urban context.
    * Sensor networks and IoT developments with application to urban systems.
    * Energy efficiency in the urban context.
    * Urban mobility and traffic engineering models.
    * Intelligent Transportation Systems (ITS).
    * Environmental management in the urban context, sustainable cities.
    * Water management processes and systems.
    * Urban waste management.
    * Intelligent buildings.
    * Security, safety and privacy in cities.
    * Health Management systems
    * Infrastructure and public asset management in the urban environment.
    * Collaboration and citizen access to urban information and services.
    * Integration and standardization for urban processes and systems.

    Original papers are invited from prospective authors with interest in the related areas. Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Papers should be at most 8 pages, including the bibliography and well-marked appendices. Papers must be received by the submission deadline.

  • Workshop 02: Risk Correlation Analysis and Risk Measurement
    Jianping Li, Institute of Policy & Management, Chinese Academy of Sciences, China (ljp@casipm.ac.cn)
    Yi Peng, University of Electronic Science and Technology of China, China (pengyicd@gmail.com)
    Xiaodong Lin, Rutgers University, USA (lin@business.rutgers.edu)
    Rongda Chen, Zhejiang University of Finance & Economics, China (rongdachen@163.com)

    The analysis of inter-risk correlation and risk aggregation is an important factor in risk measurement, for example in the interaction of market risk, credit risk and operational risk. Correlation analysis and risk measurement can, to a certain extent, be viewed as a Multiple Criteria Decision Making problem, involving trade-offs among different aspects, such as the "project triangle" (cost, quality and schedule). Some mathematical models, such as Copula models, are used for measuring risk correlation, but risk management must extend far beyond the use of standard measurements in practical operations and applications. An important aspect is to emphasize the correlation analysis of risks and thus effectively measure all kinds of financial risks. A minimal copula-based aggregation sketch follows this paragraph.
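    A minimal Python sketch of copula-based aggregation: sample correlated risk factors through a Gaussian copula, attach arbitrary marginal loss distributions, and read off an aggregate risk measure. The correlation, the marginals and the VaR level are illustrative assumptions, not a recommended model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        rho = 0.6                                   # assumed correlation between two risk factors
        cov = np.array([[1.0, rho], [rho, 1.0]])
        n = 100_000

        # Gaussian copula: correlated normals -> uniforms -> chosen marginals
        z = rng.multivariate_normal(np.zeros(2), cov, size=n)
        u = stats.norm.cdf(z)
        market_loss = stats.lognorm(s=0.8).ppf(u[:, 0])   # hypothetical market-loss marginal
        credit_loss = stats.gamma(a=2.0).ppf(u[:, 1])     # hypothetical credit-loss marginal

        total = market_loss + credit_loss
        print(f"99% VaR of the aggregated loss: {np.quantile(total, 0.99):.2f}")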
    In order to promote the development of risk correlation and measurement, we organize a special workshop dedicated to the topic of "risk correlation analysis and risk measurement" under ITQM 2015 (http://www.itqm-meeting.org/2015/). The main purpose of this workshop is to provide researchers and practitioners an opportunity to share the most recent advances in the area of risk correlation and measurement, to assess the state of knowledge in the field, to generate new results in this relatively under-researched area, and to determine directions for further research. Papers should present modeling approaches/perspectives on risk correlation and measurement. The workshop is interested in topics related to all aspects of risk correlation and measurement.
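    As a small illustration of the copula-based approach mentioned above, the following sketch joins two risk types through a Gaussian copula and reads off an aggregated Value-at-Risk; the correlation, the marginals and the confidence level are illustrative assumptions, not the workshop's models:

```python
# A minimal sketch (illustrative parameters, not the workshop's models) of a
# Gaussian copula joining two marginal loss distributions, a standard device
# for the inter-risk correlation discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
rho = 0.6                                    # assumed inter-risk correlation
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((100_000, 2)) @ L.T  # correlated standard normals
u = stats.norm.cdf(z)                        # uniform marginals via the copula
market_loss = stats.t.ppf(u[:, 0], df=4)     # heavy-tailed market risk (assumed)
credit_loss = stats.lognorm.ppf(u[:, 1], s=0.8)  # skewed credit risk (assumed)
total = market_loss + credit_loss
var_99 = np.quantile(total, 0.99)            # aggregated 99% Value-at-Risk
print(f"99% VaR of the aggregated loss: {var_99:.3f}")
```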
    Topics of interest include, but are not limited to, the following:

    * Foundation of risk correlation and dependency
    * Correlation analysis of financial risks
    * Correlation analysis of software risks
    * Correlation analysis of project risks
    * Risk correlation modeling
    * Risk analysis by multiple criteria
    * Risk integrated management and risk correlation
    * Credit scoring
    * Credit rating
    * Portfolio management
    * New techniques to risk measurement

    Original papers are invited from prospective authors with interest in the related areas. Submitted papers must not substantially overlap papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Papers should be at most 8 pages, including the bibliography and well-marked appendices. Papers must be received by the submission deadline. We invite you to submit your paper via the EasyChair page for ITQM 2015 (RCARM2015 Workshop).

  • Workshop 03: Intelligent Decision Making and Extenics based Innovation
    Xingsen Li, NIT, Zhejiang University, China (lixs@nit.zju.edu.cn)
    Chunyan Yang, Guangdong University of Technology, China (wyw@gdut.edu.cn)
    Haolan Zhang, NIT, Zhejiang University, China (haolan.zhang@nit.zju.edu.cn)
    Yanbin Liu, NIT, Zhejiang University, China (lyb.nbt@gmail.com)

    With the rapid development of information technology, knowledge acquisition through data mining has become one of the most important directions of scientific decision-making; however, utilizing computers and the Internet to solve contradictory problems and to carry out exploration and innovation remains an aspiration. Extenics is a new interdisciplinary field spanning mathematics, information science, philosophy and engineering, comprising extension theory, extension innovation methods and extension engineering. It is dedicated to the theory and methods of solving contradictory problems, using formalized models to explore the possibility of extending and transforming things and to solve contradictory problems intelligently. These intelligent methods aim to provide targeted decision support for transforming practice in the face of the data explosion. Artificial intelligence and intelligent systems offer efficient mechanisms that can significantly improve decision-making quality. Through ITQM, participants can further discuss state-of-the-art technology in the field of Intelligent Decision Making and Extenics-based Innovation, as well as the problems and issues encountered during their research.
    The topics and areas include, but are not limited to:

    * Intelligent Information Management and Problem Solving
    * Knowledge Mining on E-business
    * Intelligent Systems and their Applications
    * Intelligent Logistics Management and Web of Things
    * Web Marketing and CRM
    * Intelligent Data Analysis and Financial Management
    * Intelligent technology and Tourism Management
    * Innovation theory and Methods
    * Extenics based Applications
    * Extension data mining and its Applications
    * Web Intelligence and Innovation
    * Knowledge based Systems and decision-making theory

  • Workshop 04: High Performance Data Analysis
    Vassil Alexandrov, ICREA Research Professor in Computational Science at Barcelona Supercomputing Centre, Spain (vassil.alexandrov@bsc.es)
    Ying Liu, University of Chinese Academy of Sciences, China (yingliu@ucas.ac.cn)

    Big data has been an emerging and active research topic in recent years. There is a clear need to analyze huge amounts of unstructured and structured complex data, both historic data and data coming from real-time feeds (e.g. business data, meteorological data from sensors). This is beyond the capability of traditional data processing techniques and tools. The challenges include data capture, storage, search, sharing, transfer, analysis, and visualization. To meet the requirements of big data analysis, Computational Science and high performance computing methods and algorithms are in real demand, including scalable mathematical methods and algorithms, parallel and distributed computing, cloud computing, etc. This workshop will focus on the issues of high performance data analysis. Theoretical advances, mathematical methods, algorithms and systems, as well as diverse application areas, will be in the focus of the workshop.
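    As a small, illustrative sketch of the parallel/distributed style of analysis this workshop targets (toy data; a map-reduce pattern with Python's multiprocessing, not any specific HPC framework):

```python
# A minimal sketch of data-parallel analysis: per-chunk statistics computed
# across worker processes (map) and merged into global results (reduce).
from multiprocessing import Pool
import numpy as np

def partial_stats(chunk):
    # map step: per-chunk count, sum and sum of squares
    return len(chunk), float(chunk.sum()), float((chunk ** 2).sum())

if __name__ == "__main__":
    data = np.random.default_rng(0).normal(size=1_000_000)  # toy "big" data
    chunks = np.array_split(data, 8)
    with Pool(processes=4) as pool:
        parts = pool.map(partial_stats, chunks)
    # reduce step: merge partial results into global mean and variance
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    var = ss / n - mean ** 2
    print(f"mean={mean:.5f}, var={var:.5f}")
```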
    The 2015 workshop, the second in the series, aims to explore emerging trends in high performance data analysis. We welcome papers on all aspects of high performance data analysis, including, but not limited to:

    * Data processing exploiting hybrid architectures and accelerators (multi/many-core, CPUs, FPGAs)
    * Data processing exploiting dedicated HPC machines and clusters
    * Data processing exploiting cloud
    * High performance data-stream mining and management
    * Efficient, scalable, parallel/distributed data mining methods and algorithms for diverse applications
    * Advanced methods and algorithms for Big Data Visualisation
    * Parallel and distributed KDD frameworks and systems
    * Theoretical foundations and mathematical methods for mining data streams in parallel/distributed environments
    * Applications of parallel and distributed data mining in diverse application areas such as business, science, engineering, medicine, and other disciplines

  • Workshop 05: Credit Evaluation and Management
    Zongfang Zhou, University of Electronic Science and Technology of China, China (zhouzf@uestc.edu.cn)

    Credit has been playing an increasingly important role in market transactions, especially since the global financial crisis. With the advent of the era of big data, various information technologies, such as semantic learning, collaborative filtering, and probabilistic models, are becoming widespread and significant in the practical operations and applications of credit evaluation and management. These data-driven technologies, which not only integrate different aspects of credit but also provide more comprehensive interfaces for credit management, fundamentally challenge traditional methods of credit evaluation in both accuracy and efficiency. This workshop focuses on contemporary methods and technologies of credit evaluation and management, and aims to create a communication platform for researchers to share recent and significant developments in the area.
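    As one concrete example of the data-driven direction described above, a minimal sketch of a credit-scoring model: logistic regression fitted by gradient descent on synthetic borrower data (all features, weights and sizes are illustrative assumptions, not a production scorecard):

```python
# A minimal credit-scoring sketch on synthetic data: logistic regression
# predicting default probability from two illustrative borrower features.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
debt_ratio = rng.uniform(0, 1, n)          # illustrative feature
late_payments = rng.poisson(1.0, n)        # illustrative feature
X = np.column_stack([np.ones(n), debt_ratio, late_payments])
true_w = np.array([-3.0, 2.5, 0.8])        # assumed "true" relationship
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_w)))  # 1 = default

w = np.zeros(3)
for _ in range(2000):                      # plain gradient descent
    grad = X.T @ (1 / (1 + np.exp(-X @ w)) - y) / n
    w -= 0.5 * grad

scores = 1 / (1 + np.exp(-X @ w))          # predicted default probability
print("fitted weights:", np.round(w, 2))
print("mean predicted default rate:", round(float(scores.mean()), 3))
```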
    Topics of interest in this workshop include, but are not limited to:

    * Fundamental understanding of credit
    * Correlation and evolution of credit risk
    * Credit measurement
    * Credit risk integrated management
    * Multiple criteria decision making in an uncertain context
    * Metric system for credit evaluation
    * Credit rating
    * Portfolio management

  • Workshop 06: Intelligent Knowledge Management
    Jifa Gu, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China (zll933@163.com)
    Lingling Zhang, Management School, Graduate University of Chinese Academy of Sciences, China (zhangll@ucas.ac.cn)

    Knowledge and hidden patterns discovered by data mining from large databases have great novelty and are often unavailable from experts' experience. Their unique irreplaceability and complementarity have brought new opportunities for decision-making, and they have become an important means of expanding knowledge bases to derive business intelligence in the information era. The challenging problem, however, is whether the results of data mining can really be regarded as "knowledge". To address this issue, the theory of knowledge management should be applied. Unfortunately, there appears to be little work in the cross-field between data mining and knowledge management.
    Intelligent Knowledge Management is the management of how rough knowledge and human knowledge can be combined and upgraded into intelligent knowledge, and it aims to bridge the gap between these two fields. This line of research not only promotes significant work beyond data mining, but also enhances the quantitative analysis of knowledge management on hidden patterns from data mining.
    The main purpose of this workshop is to provide researchers and practitioners an opportunity to share the most recent advances in the areas of data mining, expert mining, pattern refinement and intelligent knowledge management, and to generate new methods to evaluate mined patterns and determine directions for further research. Papers should present modeling approaches/perspectives on intelligent knowledge. The workshop is interested in topics related to all aspects of pattern evaluation, expert mining and intelligent knowledge.
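    As one concrete way to quantify whether a mined pattern deserves to be promoted to "knowledge", a minimal sketch of standard interestingness measures (support, confidence, lift) on toy transactions; the data and rule are purely illustrative:

```python
# A minimal sketch of interestingness measures for a mined association rule.
transactions = [
    {"loan", "default"}, {"loan", "repaid"}, {"loan", "default"},
    {"card", "repaid"}, {"loan", "card", "default"}, {"card", "repaid"},
]

def support(itemset):
    # fraction of transactions containing the whole itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    # >1 means the rule beats independence, a hint of real "knowledge"
    return confidence(antecedent, consequent) / support(consequent)

rule = ({"loan"}, {"default"})
print("support:", round(support(rule[0] | rule[1]), 2))
print("confidence:", round(confidence(*rule), 2))
print("lift:", round(lift(*rule), 2))
```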
    Topics of interest include, but are not limited to, the following:

    Intelligent Knowledge Management:
    * Knowledge synthesis
    * Expert Mining
    * Pattern Refinement
    * Interestingness Measures for Knowledge Discovery
    * Knowledge Presentation and Visualization
    * Knowledge Evaluation
    * KDD Process and Human Interaction
    Intelligent Knowledge Management Systems:
    * Intelligent Systems and Agents
    * Multi Agent-based KDD Infrastructure
    * Meta-synthesis and Advanced Modeling
    * Knowledge Reuse and Ontology
    * Knowledge Management Support Systems

  • Workshop 07: The Third Workshop on Optimization-based Data Mining
    Yingjie Tian, Chinese Academy of Sciences Research Center on Fictitious Economy and Data Science, China (tianyingjie1213@163.com)
    Yong Shi, Chinese Academy of Sciences Research Center on Fictitious Economy and Data Science, China/University of Nebraska at Omaha, USA (yshi@unomaha.edu)
    Zhiquan Qi, Chinese Academy of Sciences Research Center on Fictitious Economy and Data Science, China (qizhiquan@gucas.ac.cn)

    Over the last several years, researchers have extensively applied quadratic programming to classification, most notably in V. Vapnik's Support Vector Machine, as well as to various applications. However, using optimization techniques to deal with data separation and data analysis goes back more than thirty years. According to O. L. Mangasarian, his group formulated linear programming as a large-margin classifier in the 1960s. In the 1970s, A. Charnes and W. W. Cooper initiated Data Envelopment Analysis, in which fractional programming is used to evaluate decision-making units represented by economic data in a given training dataset. From the 1980s to the 1990s, F. Glover proposed a number of linear programming models to solve discriminant problems with small sample sizes. Since 1998, the organizer and his colleagues have extended this line of research to classification via multiple criteria linear programming (MCLP) and multiple criteria quadratic programming (MCQP), which differ from statistics, decision tree induction, and neural networks. So far, more than 200 scholars around the world have been actively working in the field of using optimization techniques to handle data mining and web intelligence problems. As the third in the ITQM conference series, this workshop intends to promote research interests in the connection of optimization, data mining and web intelligence, as well as real-life applications.
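    To give a flavor of the optimization-based classifiers described above, here is a minimal sketch, assuming toy data: an L1-regularized large-margin classifier cast as a linear program, in the spirit of the LP discriminant models mentioned (not the organizers' MCLP formulation):

```python
# A minimal LP classifier: min ||w||_1 + C*sum(xi)
#   s.t.  y_i (w.x_i + b) >= 1 - xi_i,  xi >= 0.
# |w| is linearized with w = u - v, u, v >= 0; b = b+ - b-.
import numpy as np
from scipy.optimize import linprog

def lp_classifier(X, y, C=1.0):
    n, d = X.shape
    # variable vector z = [u (d), v (d), b+, b-, xi (n)]
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    Yx = y[:, None] * X
    # margin constraints rewritten as A_ub @ z <= -1
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 2 + n))
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b

# toy usage: two Gaussian clouds
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = lp_classifier(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```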

  • Workshop 08: The Second Workshop on Data Mining and Social Network Analysis
    Peng Zhang, IIE, Chinese Academy of Sciences, China (zhangpeng@iie.ac.cn)
    Zhou Xiaofei, IIE, Chinese Academy of Sciences, China (zhouxiaofei@iie.ac.cn)

    Social media such as Facebook, Flickr and Twitter have become important media for information sharing and spreading, with user numbers increasing rapidly over the past few years. Through the powerful effect of word-of-mouth, social media play a critical role in shaping people's opinions and behaviors. Social media analysis is an inherently interdisciplinary academic field that emerged from social psychology, sociology, statistics and graph theory. The workshop aims to draw together empirically-grounded and theoretically-informed researchers to discuss the key issues in contemporary social network analysis and mining methods across disparate fields and methodologies. The workshop also solicits high-quality original research papers on any aspect of data mining and social network analysis. Contributions are invited that address a range of related issues.
    Areas for consideration could include, but are not limited to:

    * Social Web Search
    * Graph data and networks
    * Algorithms and Systems for Social networks
    * Distributed and Parallel Algorithms
    * Big Data Search Architectures, Scalability and Efficiency
    * Social Data Acquisition, Integration, Cleaning, and Best Practices
    * Visualization Analytics for Social Network Data
    * Computational Modeling and Data Integration
    * Large-scale Recommendation Systems for Social Media
    * Cloud/Grid/Stream Data Mining
    * Link and Graph Mining
    * Semantic-based Data Mining and Data Pre-processing
    * Mobility and Social Network Data
    * Multimedia and Multi-structured Data Analysis

  • Workshop 09: The Second Workshop on Quantitative Finance (QF2015)
    Xianhua Wei, University of Chinese Academy of Sciences, China (weixh@ucas.ac.cn)
    Guitai Chi, Dalian University of Technology, China (chigt@dlut.edu.cn)
    Weixing Wu, University of International Business and Economics, China (wxwu@uibe.edu.cn)
    Yonghui Wang, Director-General of China QClub (wangyonghui@phfund.com.cn)

    Since Markowitz's portfolio selection theory in the 1950s, statistics and mathematics have been applied to finance and investment management. Many empirical studies have shown that historical data analysis using proper mathematical models helps to test financial and economic theory, and also improves investment performance in practice. Since the appearance of the Black-Scholes model for option pricing, mathematics, information technology and finance have increasingly converged. The complexity of financial instruments requires ever more sophisticated mathematical models and computer tools to extract information about risk and return from noisy data. Quantitative finance is a cross-disciplinary field that relies on mathematical finance, intelligent methods and computer simulations to make trading, hedging and investment decisions, as well as to facilitate the risk management of those decisions. This special workshop intends to promote research interests in both the academic and industrial communities at the intersection of (i) mathematics and statistics, (ii) information technology, and (iii) finance and economics.
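    As a small illustration of the option-pricing strand mentioned above, a sketch of the standard Black-Scholes price for a European call (the parameters in the usage line are illustrative):

```python
# Black-Scholes price of a European call, using only the standard library
# (math.erf for the normal CDF).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: vol."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 4))  # ~10.4506
```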
    The workshop calls for papers from researchers and professors at universities and academic institutions working in the above interface fields. The workshop welcomes high-quality academic papers (theoretical or empirical) across the broad range of quantitative finance topics, including, but not limited to, the following:

    Theory
    * Asset Pricing Theory
    * Portfolio Selection Theory
    * General Equilibrium Theory
    * Rational Expectation Theory
    * Term Structure Theory
    * Arbitrage Theory
    * Hedging and Trading Theory
    * Insurance and Actuarial Theory
    Applications
    * Portfolio Optimization
    * Asset Pricing and Valuation
    * Financial Time Series Forecasting
    * Credit Risk Modeling
    * Interest/Exchange Rates Determination
    * Financial Derivatives Pricing and Trading
    * Basel III, Solvency II and Risk Management
    * Emerging Markets Issues
    * Extreme Events and Volatility Modeling
    * Financial and Econometrics Modeling

    The workshop will invite senior quantitative analysts, portfolio managers and fund managers from well-known securities, fund management and asset management companies to participate in the conference. The participants will discuss the development of quantitative asset management, share experience and lessons from their own practice, and work on solutions to domestic application problems of quantitative finance. The afternoon session will be arranged as a forum: two to three invited speakers (to be decided) will give keynote speeches focusing on quantitative asset management, with free discussion following the afternoon tea break.

  • Workshop 10: On Supporting Informed Decision-Making in Real-Time: Where Environmental Sensing Meets the Data Analytics
    Paulo de Souza, CSIRO, Australia (Paulo.Desouzajunior@csiro.au)

    A number of businesses are affected by environmental changes. Such economic activities include farming, forestry, aquaculture, tourism, energy generation and distribution, logistics, and insurance. Environmental factors also have an impact on public health, as evidenced by epidemiological studies. Human behaviour is likewise influenced by environmental conditions; for example, recreation choices differ between rainy and sunny days. This workshop is designed to discuss the key science questions arising from initiatives that correlate environmental sensing data with business operations and social behaviour. These include issues related to big data, data streaming, web mining, decision analysis, quality assurance and quality control, data re-purposing, risk management, visualisation, interoperability, provenance and open-data policies, to mention a few; and, more importantly, what it takes to design, implement and maintain a sustainable infrastructure to support informed decision-making in real time. The workshop aims to be an attractive forum for deep technical discussions about key aspects of this research area. Speakers from leading global companies and international research groups and organizations will be invited.

  • Workshop 11: Analytics in Education
    Lotfollah Najjar, University of Nebraska at Omaha (lnajjar@unomaha.edu)
    Leah Pietron, University of Nebraska at Omaha, USA (lpietron@unomaha.edu)

    The growth of online learning, massive open online courses (MOOCs), technology use within the classroom, and systems that track student behavior during – and sometimes outside of – learning activities has created an abundance of data that could, in principle, be used to improve student learning and retention, personalize instruction, inform decision-making by students and educators, and lead to the development of improved learning systems and experiences. Many challenges face the individuals and institutions that seek to turn the mounds of data generated during the learning process into these benefits. Linking raw data and valid analysis to effective learning adaptations, educational interventions and improved outcomes remains an uncertain endeavor. This session invites contributions that advance our understanding of the use of "big data" to improve learning outcomes and educational processes. Topics include, but are not limited to, learning analytics, educational data mining, evidence-centered design, adaptive learning systems, data visualization, personalized instruction systems, and student performance prediction.

  • Workshop 12: Big Data & Social Governing
    Jie Cao, Nanjing University of Information Science & Technology, China (cj@amss.ac.cn)
    Tinghuai Ma, Nanjing University of Information Science & Technology, China

    Big data means explosive growth in the volume, velocity, variety and value of the data created on a daily basis. How to use big data and data analysis techniques is a major problem for social governance. The "intelligent city" aggregates large volumes of data from everywhere, shares them quickly and in varied forms, and seeks to find the data's value. The storage, analysis and application of big data require distributed network processing and data mining techniques. Furthermore, exploring big data can lead to privacy leakage: data analysis techniques and data privacy protection techniques stand in a conflicting dilemma.
    Social governance means using state power to deal with major social, economic and political problems. Clearly, the big data collected by government can help with governance, for example by using opinion mining, text mining, social network analysis and similar techniques to discover emergencies and trends in public sentiment. Emergency management also needs big data to support decisions.
    The main goal of this special session is to help unify and streamline research on big data and social governance on the Internet. We want to bring together researchers from cloud computing, big data, data mining, online information security and privacy, and public administration. We also encourage submissions related to recommendation systems and location-aware techniques and applications.

    Topics of Interest:
    * Big Data Science and Foundations
    * Big Data Infrastructure
    * Big Data Management
    * Big Data Search and Mining
    * Big Data Security & Privacy
    * Big Data Applications
    * Basic theory of network governing
    * Performance of e-government
    * Case based network governance
    * General mechanism of network governances
    * Trend of social development and the government's response in big data age

  • Workshop 13: Hydrothermal Dispatch – Scenario Generation and Management
    Reinaldo Castro Souza, Pontifical Catholic University of Rio de Janeiro, Brazil (reinaldo@ele.puc-rio.br)
    Fernando Cyrino, Pontifical Catholic University of Rio de Janeiro, Brazil (cyrino@puc-rio.br)
    Hugo Ribeiro Baldioti, Pontifical Catholic University of Rio de Janeiro, Brazil (baldioti@ele.puc-rio.br)

    Alternative scenario-generation models have been developed over the years for use in medium-term hydrothermal energy operation planning. These studies are motivated by the ongoing need to investigate improvements over current practice.
    Complex hydrothermal systems are highly dependent on good forecasting. To coordinate, plan and operate such a system in its full complexity, it is necessary to know in advance the volume of water available in the reservoirs of each plant. That is, one must know the volume of water that will be available for power generation in order to estimate, from that information, the amount of energy the plant can produce optimally, reducing costs and increasing reliability.
    There are several ways to perform this type of modeling, ranging from time series techniques, such as autoregressive and periodic autoregressive models, to computational intelligence models. Among the existing possibilities for generating future scenarios (short, medium and long term), there is room both for entirely new techniques and for improvements to existing synthetic scenario generation models.
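    To make the modeling families named above concrete, here is a minimal sketch of synthetic inflow scenario generation with a lag-1 periodic autoregressive, PAR(1), model; all parameters are illustrative assumptions, not fitted to any real basin:

```python
# Synthetic monthly inflow scenarios from a PAR(1) model (illustrative).
import numpy as np

def par1_scenarios(mu, sigma, phi, n_scenarios=200, n_years=5, seed=1):
    """mu, sigma, phi: 12-vectors of monthly means, standard deviations and
    lag-1 correlations (in practice fitted to the historical inflow record)."""
    rng = np.random.default_rng(seed)
    months = 12 * n_years
    z = np.zeros((n_scenarios, months))          # standardized inflows
    for t in range(1, months):
        m = t % 12
        eps = rng.standard_normal(n_scenarios)
        z[:, t] = phi[m] * z[:, t - 1] + np.sqrt(1.0 - phi[m] ** 2) * eps
    idx = np.arange(months) % 12
    return mu[idx] + sigma[idx] * z              # back to natural units

# assumed climatology: wetter first four months of the year
mu = np.full(12, 100.0); mu[:4] = 150.0
sigma = np.full(12, 20.0)
phi = np.full(12, 0.7)
scenarios = par1_scenarios(mu, sigma, phi)
print(scenarios.shape, round(float(scenarios.mean()), 1))
```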
    Hydrothermal dispatch problems are basically divided into two steps: scenario generation and optimization. Decisions are based on the results of these simulations.
    The goal of this workshop is to promote a discussion of existing methodologies for generating scenarios applied to hydrothermal dispatch, and of how to evaluate them in order to manage such systems.
    This workshop will focus on both theoretical and applied methods – in research, in development, or already implemented. The topics of interest include but are not limited to:

    * Scenarios Generation models;
    * Case studies of real hydrothermal systems;
    * Optimization techniques applied to the hydrothermal dispatch;
    * Time Series Models Evaluation;
    * Multiple criteria decision analysis;
    * The impact of new energy sources in the hydrothermal dispatch;

  • Workshop 14: How to prepare businesses for change? Enterprise Architecture, big data, cloud computing and mobile: the way to structure information
    Andr谷 Figueiredo, SENSEDIA (andre.figueiredo@sensedia.com)
    Fabio Reginaldo, IBMEC, Quode Project, International Institute of Learning (fabioreginaldo@yahoo.com)

    For many years competition has grown increasingly fierce, and companies need to improve, innovate, or even shrink operations to stay focused on their business and generate the profits they expect. Knowledge about technologies, methods and tools has become common, and these are increasingly treated as commodities.
    It is striking that many companies do not fully know their own processes, which then become not an organizational asset but the property of individual employees. This is the reality in most companies.
    According to the Gartner Group, process was the main concern of CEOs in 2009. The market has seen companies move to pursue improvements, but new technological challenges have appeared that sometimes overrun strategic planning, such as cloud computing, big data, the Internet of Things and mobile. Understanding a company – whether its product is good, or whether its customers are satisfied – no longer happens only within its own structures, but also in social networks.
    How should companies be structured for this new setting and these technologies? Customers are not the same, so how should companies react to this new scenario?
    Enterprise Architecture is a way for businesses to prepare for major changes, because only an organization that knows itself in detail can change quickly, efficiently and safely. This workshop aims to bring together discussion papers that present results, case studies, and uses of methods and practices in Enterprise Architecture, considering the new technological and social scene.
    The topics of interest are:

    * Requirements Taxonomy for structuring problems in MCDA
    * Enterprise Architecture as the basis for Support Systems Decision Making
    * Enterprise Architecture in general
    * Enterprise Architecture addressing cloud computing structure
    * Enterprise Architecture addressing definition of business requirements that consider social networks and Big Data
    * Mathematical and statistical systems that support implementation of Enterprise Architecture
    * SOA
    * TOGAF, The Open Group Architecture Framework
    * Zachman Framework
    * Business Intelligence that supports Enterprise Architecture and Big Data
    * Systemic Solutions for Enterprise Architecture
    * Strategic planning solutions with implementation of Enterprise Architecture

  • Workshop 15: Quantitative Methods Applied to Investment Analysis
    Rodrigo Novinski, professor of economics, Ibmec-RJ (rodrigo.novinski@ibmecrj.br)
    Sergei Vieira, professor of economics, Ibmec-RJ (sergei.vieira@ibmecrj.br)

    Since seminal works such as Markowitz (1952) and Sharpe (1964), the modeling of investment and the subsequent creation of tools that enable its implementation have been objects of particular interest to scholars and practitioners. This session aims to present recent developments in mathematical, statistical and computing techniques on this topic, as well as the insights gained from their application to the real world. We invite researchers, students and market professionals to contribute to this session by sending high-quality academic papers related (but not limited) to the following themes; a small illustrative sketch follows the list:

    * Asset Pricing with Transaction Costs
    * Coherent Downside Risk Measures
    * Dynamic Factor Models
    * High-Frequency Trading
    * Stochastic Volatility
    * Stochastic Optimization
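    As a small illustration of the portfolio-selection strand referenced above, a minimal sketch of Markowitz mean-variance optimization in closed form (returns and covariances are illustrative assumptions; short sales allowed):

```python
# Minimum-variance weights for a target return, via the Lagrangian
# (KKT) linear system of the Markowitz problem.
import numpy as np

def min_variance_weights(mu, Sigma, target):
    """Solve min w'Σw  s.t.  w'mu = target, sum(w) = 1."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system: [2Σ mu 1; mu' 0 0; 1' 0 0] [w; λ1; λ2] = [0; target; 1]
    A = np.block([
        [2 * Sigma, mu[:, None], ones[:, None]],
        [mu[None, :], np.zeros((1, 2))],
        [ones[None, :], np.zeros((1, 2))],
    ])
    b = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(A, b)[:n]

mu = np.array([0.08, 0.12, 0.10])          # expected returns (illustrative)
Sigma = np.array([[0.04, 0.01, 0.00],      # covariance matrix (illustrative)
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
w = min_variance_weights(mu, Sigma, target=0.10)
print("weights:", np.round(w, 3), "return:", round(float(w @ mu), 3))
```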

  • Workshop 16: Scientific Data Analysis and Decision Making
    Dengsheng Wu, Assistant prof., Institute of Policy and Management, Chinese Academy of Sciences, China (wds@casipm.ac.cn)
    Yuanping Chen, Assistant prof., Computer Network Information Center, Chinese Academy of Sciences, China (ypchen@cashq.ac.cn)
    Xianyu Lang, Associate prof., Computer Network Information Center, Chinese Academy of Sciences, China (xylang@cashq.ac.cn)

    As e-Science has emerged as a persistent and increasingly large part of the research enterprise, scientists are exploring new roles, services, staffing, and resources to address the issues arising from this new mode of research. Scientists use computer modeling and simulation programs to test and produce new theories and experimental techniques, often generating and accumulating vast amounts of data. Ideally, those data could be shared with other scientists for reuse and re-analysis, ultimately speeding up the process of scientific discovery. The collection and utilization of scientific data are the two major features that characterize e-Science. Scientific data are generated by different departments and activities within research institutions, and are managed in a decentralized way and stored separately, which makes them difficult to share and manage. At the same time, the global sharing of data has promoted interdisciplinary teamwork on complex problems and has enabled other researchers to use data for different purposes. The main purpose of this workshop is to provide researchers and practitioners an opportunity to share the most recent advances in the area of data science and decision analysis for e-Science, and to create a communication platform for researchers to share recent and significant developments in the area.
    Topics of interest include, but are not limited to, the following:

    * Metadata standard of scientific data
    * Scientific data quality analysis
    * Scientific data integration and sharing
    * ETL process for scientific data
    * Scientific data visualization
    * Decision analysis modeling from scientific data
    * Network analysis from scientific data
    * Bibliometric analysis from scientific data

    Copyright © ITQM 2015. All rights reserved.