Accepted Special Sessions and Workshops


Index:

Special Sessions:

  • Special Session 01: Soft computing methods in quantitative management and decision making processes

    Florin Gheorghe Filip, Romanian Academy, Romania. (ffilip@acad.ro)
    Ioan Dzitac, Agora University of Oradea & Aurel Vlaicu University of Arad, Romania. (professor.ioan.dzitac@ieee.org)
    Simona Dzitac, University of Oradea, Romania. (simona@dzitac.ro)

    In accordance with Zadeh's definition, Soft Computing (SC) is based on Fuzzy Logic, Neural Networks, Support Vector Machines, Evolutionary Computation, Machine Learning and Probabilistic Reasoning. SC can deal with ambiguous or noisy data and is tolerant of imprecision, uncertainty, partial truth, and approximation. In effect, the role model for SC is the human mind. Artificial Intelligence and Computational Intelligence based on SC provide the background for the development of smart management systems and decisions in the case of ill-posed problems.
    In many real-world situations, the problems of decision making are subject to constraints, objectives and consequences that are not accurately known. After Bellman and Zadeh first introduced fuzzy sets into multiple-criteria decision making (MCDM), many researchers have focused on decision making in fuzzy environments. The fusion between MCDM and fuzzy set theory has led to a new decision theory, known today as fuzzy multi-criteria decision making (FMCDM), in which decision-maker models can deal with incomplete and uncertain knowledge and information. Most importantly, when we want to assess, judge or decide, we usually use a natural language in which the words do not have a clear, definite meaning. As a result, we need fuzzy numbers to express linguistic variables and to describe the subjective judgement of a decision maker in a quantitative manner. The fuzzy numbers (FN) most often used are triangular FN, trapezoidal FN and Gaussian FN. We highlight that the concept of linguistic variable, introduced by Lotfi A. Zadeh in 1975, allows computation with words instead of numbers, and thus linguistic terms defined by fuzzy sets are intensely used in decision theory for modelling uncertain information. After Atanassov introduced the concept of intuitionistic fuzzy sets, where each element is characterized by a membership function, as in fuzzy sets, as well as by a non-membership function, interest in studying decision-making problems with the help of intuitionistic fuzzy sets has increased.
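    As a brief illustration of the fuzzy-number machinery mentioned above, the following is a minimal sketch (not part of the session description): a triangular fuzzy number membership function used to score a rating against a hypothetical linguistic scale. The terms, ranges and rating are illustrative assumptions only.

      # Minimal sketch: membership degree of a rating in triangular fuzzy numbers
      # that encode hypothetical linguistic terms for one decision criterion.

      def triangular_membership(x: float, a: float, b: float, c: float) -> float:
          """Membership of x in the triangular fuzzy number (a, b, c)."""
          if x <= a or x >= c:
              return 0.0
          if x == b:
              return 1.0
          if x < b:
              return (x - a) / (b - a)
          return (c - x) / (c - b)

      # Hypothetical linguistic scale for a criterion rated on a 0-10 scale.
      linguistic_terms = {
          "low":    (0.0, 2.0, 4.0),
          "medium": (3.0, 5.0, 7.0),
          "high":   (6.0, 8.0, 10.0),
      }

      rating = 6.5
      for term, (a, b, c) in linguistic_terms.items():
          print(term, round(triangular_membership(rating, a, b, c), 2))
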
    The goal of this special session is to bring together researchers interested in applications of soft computing algorithms and procedures in quantitative management and decision making, in order to exchange ideas on problems, solutions, and to work together in a friendly environment.
    Topics of interest include, but are not limited to, the following:

    - Ant colony optimization algorithms;
    - Artificial intelligence methods for web mining;
    - Bayesian networks and decision graphs;
    - Computational intelligence methods for data mining;
    - Decision support systems for quantitative management;
    - Decision making with missing and/or uncertain data;
    - Fuzzy multi-criteria decision making;
    - Fuzzy and neuro-fuzzy modelling and simulation;
    - Fuzzy numbers applications to decision making;
    - Fuzzy-sets-based models in operation research;
    - Knowledge Discovery in Databases;
    - Machine learning for intelligent support of quantitative management;
    - Neural networks in decision making tools;
    - Smarter decisions;
    - Support Vector Machine in SC applications;

  • Special Session 02: 5th Intelligent Decision Making and Extenics based Innovation

    Xingsen Li, NIT, Zhejiang University, China. (lixs@nit.zju.edu.cn)
    Chunyan Yang, Guangdong University of Technology, China. (fly_swallow@126.com)
    Yanwei Zhao, Zhejiang University of Technology, China. (zyw@zjut.edu.cn)
    Ping Yuan, NIT, Zhejiang University, China. (yuanping1212@163.com)
    Chaoyi Pang, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia. (Chaoyi.Pang@csiro.au)

    With the rapid development of information technology, knowledge acquisition through data mining has become one of the most important directions of scientific decision-making. Extenics is a new inter-discipline of mathematics, information, philosophy, and engineering, including extension theory, extension innovation methods and extension engineering. It is dedicated to exploring the theory and methods of solving contradictory problems; it uses formalized models to explore the possibility of extending and transforming things and to solve contradictory problems intelligently. The intelligent methods aim to provide targeted decision-making on the transformation of practice, which is facing the challenges of data explosion. Artificial intelligence and intelligent systems beyond big data offer efficient mechanisms that can significantly improve decision-making quality. Through ITQM, participants can further discuss the state-of-the-art technology in the field of Intelligent Decision Making and Extenics based Innovation, as well as the problems and issues encountered during their research. The topics and areas include, but are not limited to:

    - Extenics based Information methods and technology;
    - Intelligent knowledge management based on Extenics;
    - Intelligent Information Management and Problem Solving on Extenics;
    - Knowledge Mining on E-business;
    - Intelligent Systems and its Applications based on Extenics;
    - Intelligent Logistics Management and Web of Things combined with Extenics;
    - Web Marketing and CRM;
    - Intelligent Data Analysis and Financial Management;
    - Intelligent technology and Tourism Management;
    - Innovation theory and Extenics based Methods;
    - Extenics based Decision Making;
    - Extension data mining and its Applications;
    - Web Intelligence and Innovation on big data;
    - Knowledge based Systems and decision-making theory combined with Extenics;
    - Soft power and soft technology;
    - Big data technology and applications;

  • Special Session 03: Understanding Financial Risk via Big Data

    Xiaoguang Yang, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, China. (xgyang@iss.ac.cn)
    Haizhen Yang, University of Chinese Academy of Sciences, China. (haizheny@ucas.ac.cn)
    Jichang Dong, University of Chinese Academy of Sciences, China. (jcdong@ucas.ac.cn)

    Financial activity is a core human activity in modern times, and financial risk is a key part of any financial activity. Although the aim of any financial activity is very clear, namely making money, the ways to make money are very diverse and complicated; almost every kind of trick human beings can imagine can be found here. Understanding financial risk is an interesting and challenging problem. Big data provides a tremendous space in which to explore the properties of financial risk across various financial activities. In fact, the financial system itself keeps excellent data records: it not only records each (potential) transaction but also produces summary reports at many levels. What is more, most human activities are related to money, so most records about human beings contain financial information. IT provides vast (digital and textual) data related to finance. These data have the obvious 5Vs: volume, variety, velocity, veracity, and value. They are typical big data. These big data record all kinds of human behavior related to financial risks, just as the old Chinese saying goes, "nothing can escape from Heaven's net". Therefore, big data offer a great opportunity to reveal what lies behind the data.
    The proposed session aims to bring together academic researchers and practitioners working in the area to present their up-to-date research results, explain demands from industry, exchange ideas and discuss future directions. Topics include, but are not limited to, the following:

    - Models for market risk, credit risk, operational risk, liquidity risk, country risk, etc, in different financial markets;
    - Determinants and mechanisms of various financial risks under different market conditions;
    - Market participant risk behavior via big data;
    - Financial contagion, systemic risk, and financial stability under big data view;
    - Financial risk management, financial regulation under big data;
    Information about the organizers:
    Xiaoguang Yang is a professor of management science at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, vice-director of the Institute of Systems Science, Chinese Academy of Sciences, and former chairman of the sub-society of financial system engineering of the Society of System Engineering of China (he serves as the current secretary-general of the Society of System Engineering of China). He won the national highest award in management science.
    Haizhen Yang is a professor of finance at the University of Chinese Academy of Sciences and a professor in the Center of Fictitious Economics and Data Science of the Chinese Academy of Sciences. She serves as a consultant for many financial institutions. She is the former vice president of the Economic Research Academy of the Xinjiang Uyghur Autonomous Region and won the second-class prize for scientific development of the Region.
    Jichang Dong is a professor of data science at the University of Chinese Academy of Sciences and vice-dean of its School of Economics and Management. He serves as a council member of the national education committee for management under the State Council. He won the national outstanding award for young scientists.

  • Special Session 04: Digital Marketing

    Vandana Ahuja, Jaypee Business School, India. (vandana.ahuja@jiit.ac.in)

    Technological advances and the speed with which new technologies are being embraced by corporates, along with the rising power of consumers and their ability to get what they want, when they want it, from whomever they want, have opened up new challenges for marketing. With this in mind, the need to understand the digital world and its applications becomes one of the greatest competitive aspects of a business's survival. The buzzword of globalization holds no meaning without the concept of what is being termed 'Digitization'.
    This special session on Digital Marketing integrates concepts from the virtual world and analyses how the field of Marketing can benefit from these developments.
    The contents of the special session will be as follows:

    - Digital Marketing in a Digital Ecosystem;
    - The Online Marketing Mix;
    - Online Branding;
    - Building Online Traffic;
    - Engagement Marketing through Content Management;
    - The web and the consumer decision making process;
    - Using Online Communities for Marketing;
    - Consumer Generated Media (CGM);
    - Mining CGM;
    - Digital marketing Case Studies;
    Dr. Vandana Ahuja has 18 years of experience across the corporate sector and academia. She is the author of Digital Marketing, a book published by Oxford University Press. She has been actively researching the domain of the collaborative web, with a focus on its contributions to the fields of Marketing and CRM, and has several years of research experience. She has published several manuscripts in international and national journals. Her research work has found a place in the curriculum offered by the Digital Marketing Institute, Middlesex, UK. She also serves on the editorial boards of several international journals. At Jaypee Business School, she is the Area Chair, Marketing, and teaches Sales and Distribution Management, Social Media and E-Marketing, and B2B Marketing. She can be contacted at vandyahuja@yahoo.com, vandana.ahuja@jiit.ac.in.

  • Special Session 05: Strategies to Develop Trade Data Exchange Mechanism: With Special Reference to South Asian members of APTA

    Debdeep, Jaypee Business School, Jaypee Institute of Information Technology, India. (debdeep.de@jiit.ac.in)

    The Asia-Pacific Trade Agreement (APTA) is one of the major preferential trade agreements in the Asia-Pacific region, which aims to liberalize and expand trade progressively through mutual relaxation of tariff and non-tariff measures and the pursuit of various other means of economic cooperation consistent with the members' respective present and future development and trade needs. However, the utilisation of APTA preferences by exporters from India and China has declined in recent years. The fluctuation in utilisation of APTA preferences is quite significant; thus, if APTA is to reach its full potential and fulfil its role as a force for regional integration, the coverage should be significantly increased. Export and import procedures in most developing countries of the region still go through a cumbersome process, and the time taken is also too high. Sri Lanka, for instance, relies on internally developed software and manual checking to make the data error-free, instead of the standard packages used internationally. Hence, data collection, collation and reporting take longer. It has been realised from earlier studies that data mismatch among APTA members is quite significant. Also, many of the APTA members either don't collect trade data through the preferential route or collect it haphazardly without having a proper reporting system. APTA is planning to address the data issues holistically by developing a 'trade data exchange' programme. Presently, APTA is the only regional trade agreement involving the major trading powers of the region. APTA members are committed to expanding their cooperation into significant issues, which include investment and trade facilitation.
    Against this backdrop, the workshop focuses on trade data reporting, which is quite inconsistent amongst the developing nations, and would thus help to understand and identify remedial measures for effective trade data exchange. India, being one of the biggest countries among the APTA members, is now getting integrated with Asian countries, and the workshop will attempt to address how APTA can act as a catalyst to accentuate its 'Look East' dream.
    Objectives:
    - To understand the ground reality of the data collection system, bottlenecks within the countries, international challenges in handling, negotiating issues, monitoring aspects etc.
    - The study aims to develop a strategic roadmap for select South-Asian members of APTA.
    Learning Outcomes:
    - This would help researchers and students to understand technical issues related to trade facilitation, data harmonization and data collection.
    - This would also sensitize participants to the real trade-related integration issues faced by exporters/importers, from a realistic perspective.
    - To appreciate and analyse the pursuit of the Look-East dream through maximum utilization of APTA.
    Brief Profile of the Organiser:
    Debdeep is currently teaching at the Jaypee Business School, in the area of International Economics and Management. He has advised projects sponsored by international organisations like The South Asian Network for Development and Environmental Economics, United Nations Economic and Social Commission for Asia and the Pacific and national public sector organisations like Agricultural & Processed Foods Exports Development Authority on several issues related to regional trade and formation of international production networks, economics of environmental regulation, regional integration, development planning, state export promotion measures, etc. He has a number of publications to his credit at national and international levels. He is also a Consultant to the Ministry of Commerce, Govt. of India and Confederation of Indian Industry. He can be reached at debdeep.de@jiit.ac.in.

  • Special Session 06: Reproducible Research using R

    Moonis Shakeel, Jaypee Business School, Jaypee Institute of Information Technology, India. (moonis.shakeel@jiit.ac.in)

    Results from scientific research have to be reproducible to be trustworthy. We do not want a finding to be merely due to an isolated occurrence, e.g., only one specific laboratory researcher can produce the results on one specific day, and nobody else can produce the same results under the same conditions. Jon Claerbout at Stanford University (Fomel and Claerbout, 2009) first proposed the term 'reproducible research'. The idea is that the final product of research is not only the paper itself, but also the full computational environment used to produce the results in the paper, such as the code and data necessary for reproducing the results and building upon the research. Fortunately, journals have been moving in that direction as well.
    Why Should Research Be Reproducible?
    * Standard to judge scientific claims
    The "ultimate standard" for evaluating scientific claims is whether or not the claims can be replicated [Peng, 2011; Kelly, 2006]. Reproducibility enhances replicability. If other researchers are able to clearly understand how a finding was originally made, then they will be better able to conduct comparable research in meaningful attempts to replicate the original findings.
    * Better work habits
    Making a project reproducible from the start encourages you to use better work habits. It can spur you to more effectively plan and organize your research. It should push you to bring your data and source code up to a higher level of quality than you might if you thought "no one was looking" [Donoho, 2010, 386].
    * Better teamwork
    The steps you take to make sure an independent researcher can figure out what you have done also make it easier for your collaborators to understand your work and build on it. This applies not only to current collaborators, but also future collaborators. Bringing new members of a research team up to speed on a cumulatively growing research project is faster if they can easily understand what has been done already [Donoho, 2010, 386].
    * Changes are easier
    A third person may or may not actually reproduce your research even if you make it easy for them to do so. But, you will almost certainly reproduce parts or even all of your own research. No actual research process is completely linear. You almost never gather data, run analyses, and present your results without going backwards to add variables, make changes to your statistical models, create new graphs, alter results tables in light of new findings, and so on.
    * Higher research impact
    Reproducible research is more likely to be useful for other researchers than non-reproducible research. Useful research is cited more frequently [Donoho, 2002; Piwowar et al., 2007; Vandewalle, 2012].
    The topics and areas include, but are not limited to, the following:

    - Introducing Reproducible Research;
    - Getting Started with R, RStudio, and knitr/rmarkdown;
    - File Management;
    - Data Gathering and Storage;
    - Analysis and Results;
    - Presentation Documents;

  • Special Session 07: Smart Energy Systems: The Need to Incorporate Homeostasis-Based Control Systems in the Design of Sustainable Energy Systems (SES)

    Franco Fernando Yanine, Universidad Finis Terrae, Santiago, Chile. (fyanine@uft.cl)
    Felisa M. Córdova, Universidad Finis Terrae, Santiago, Chile. (felisa.cordova@gmail.com)
    Antonio Sanchez-Squella, Universidad Tecnica Federico Santa Maria. (Antonio.sanchez@usm.cl)

    Ever since Cannon first formulated the concept of homeostasis, over 80 years ago [1,2], attention on homeostasis has been largely focused on its role in medicine and biology, to find cures for diseases like diabetes and obesity, for example. Thereafter, research on the subject focused chiefly on the corrective responses initiated after the steady state of the organism is perturbed. However, the concept of homeostasis, as important as it is in medicine and biology, has also been applied to Electric Power Systems (EPS) research [3-9], and in fact it should be extended not only to include reactive homeostasis but also the precise homeostatic control mechanisms that can be designed to enable a sustainable energy system (SES) to predict when environmental challenges are approaching or are most likely to occur, and how the perturbation or disruption could impact the EPS [3-9].
    Sustainable energy systems encompass both reactive and predictive homeostasis, operating recursively and in coordination with one another in the face of an environmental challenge. Reactive homeostasis (RH) in SES, as the name suggests, is a feedback-enabled mechanism driven by energy generation and supply versus consumption or expenditure of energy. This can be engineered in SES by employing sensors, control limit actuators (for example, set-point-triggered responses) and AI algorithms that allow the system to make decisions in response to changes in a predetermined array of system control variables. Thus the SES takes actions to counteract or fend off adverse conditions and noise that may affect the system's normal operation.
    On the other hand, predictive homeostasis (PH) mechanisms generate responses well in advance of potential or possible challenges, once the system has reached a threshold signaling a predetermined degree of likelihood that an event will occur. Hence there is a set of precise SES responses that come about in anticipation of predictable environmental challenges. Such PH responses enable the energy system to immediately prepare itself, taking the necessary precautions and actions to adapt and even reconfigure itself if necessary, in order to respond to the challenge ahead of time. Such actions may come in several forms and will depend on the resources and intelligence built into the system, but they are all geared towards making the SES more secure and able to withstand the upcoming challenge by activating its readiness control mechanisms. Actions may differ in magnitude and timeliness; some may be big and come immediately to adjust parts of the SES operation, while others may come in the form of smaller changes in the system, largely as a result of a stage-by-stage preparedness protocol building over time. The decision of which changes will occur first, where, and how big they will be is determined by both the RH and PH control mechanisms engineered into the SES. Some may come very soon, while others may come well in advance of a probable environmental challenge.
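    A minimal sketch of the two mechanisms described above, purely for illustration: reactive homeostasis (RH) corrects a deviation after it is measured, while predictive homeostasis (PH) acts once a disturbance forecast crosses a likelihood threshold. All set-points, thresholds and actions below are assumptions, not values from the cited work.

      SET_POINT_KW = 50.0     # desired generation/consumption balance
      RH_TOLERANCE_KW = 5.0   # deviation that triggers a reactive correction
      PH_THRESHOLD = 0.8      # forecast probability that triggers preparation

      def reactive_homeostasis(measured_balance_kw: float) -> str:
          """Feedback response: act only after the measured state drifts too far."""
          deviation = measured_balance_kw - SET_POINT_KW
          if abs(deviation) <= RH_TOLERANCE_KW:
              return "hold"
          return "curtail load" if deviation < 0 else "store surplus"

      def predictive_homeostasis(p_disturbance: float) -> str:
          """Feedforward response: prepare once a disturbance looks likely enough."""
          if p_disturbance >= PH_THRESHOLD:
              return "pre-charge storage and reconfigure feeders"
          return "no anticipatory action"

      # One control step combining both mechanisms.
      print(reactive_homeostasis(measured_balance_kw=42.0))  # reactive correction
      print(predictive_homeostasis(p_disturbance=0.85))      # anticipatory action
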
    References
    [1] W. B. Cannon, "Organization for physiological homeostasis". Physiological Reviews, 9(3), (1929).
    [2] W. B. Cannon, "Stresses and strains of homeostasis". The American Journal of the Medical Sciences, 189(1), (1935): 13-14.
    [3] F. F. Yanine & E. E. Sauma, "Review of grid-tie micro-generation systems without energy storage: Towards a new approach to sustainable hybrid energy systems linked to energy efficiency". Renewable and Sustainable Energy Reviews, 26, (2013): 60-95.
    [4] F. F. Yanine, F. I. Caballero, E. E. Sauma & F. M. Córdova, "Homeostatic control, smart metering and efficient energy supply and consumption criteria: A means to building more sustainable hybrid micro-generation systems". Renewable and Sustainable Energy Reviews, 38, (2014): 235-258.
    [5] F. F. Yanine, F. I. Caballero, E. E. Sauma & F. M. Córdova, "Building sustainable energy systems: Homeostatic control of grid-connected microgrids, as a means to reconcile power supply and energy demand response management". Renewable and Sustainable Energy Reviews, 40, (2014): 1168-1191.
    [6] F. F. Yanine, E. E. Sauma & F. M. Córdova, "An exergy and homeostatic control approach to sustainable grid-connected microgrids without energy storage". Applied Mechanics and Materials, 472, (2014): 1027-1031.
    [7] F. Caballero, E. Sauma & F. Yanine, "Business optimal design of a grid-connected hybrid PV photovoltaic-wind energy system without energy storage for an Easter Island's block". Energy, 61, (2013): 248-261.
    [8] F. F. Yanine, F. M. Córdova & L. Valenzuela, "Sustainable Hybrid Energy Systems: An Energy and Exergy Management Approach with Homeostatic Control of Microgrids". Procedia Computer Science, 55, (2015): 642-649.
    [9] F. Yanine & F. M. Córdova, "Homeostatic control in grid-connected micro-generation power systems: A means to adapt to changing scenarios while preserving energy sustainability". In Renewable and Sustainable Energy Conference (IRSEC), 2013 International, March 2013 (pp. 525-530). IEEE.

  • Special Session 08: Event Studies in Finance

    Chhavi Mehta, International Management Institute, Delhi. (chhavi.mehta@imi.edu)
    Reena Nayyar, International Management Institute, Delhi. (reena.nayyar@imi.edu)

    Event study methodology is one of the most popular statistical research designs in the area of finance. It is used to examine the market's response to a well-defined event by examining security prices around such events. The impact on security prices enables the researcher to differentiate between wealth-creating and wealth-destroying events. Event studies not only have applications in evaluating the impact of financial events on stock prices; they can also be useful in studying the impact of various events in other areas of management.
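    To make the methodology concrete, here is a minimal sketch assuming the standard market-model setup: estimate alpha and beta over an estimation window, then compute abnormal and cumulative abnormal returns over the event window. The return series below are made-up illustrations, not data from any study.

      import numpy as np

      # Hypothetical daily returns for the estimation and event windows.
      estimation_stock  = np.array([0.010, -0.004, 0.006, 0.002, -0.008, 0.005])
      estimation_market = np.array([0.008, -0.003, 0.005, 0.001, -0.006, 0.004])
      event_stock  = np.array([0.030, 0.012, -0.002])
      event_market = np.array([0.004, 0.006, -0.001])

      # Market model: R_stock = alpha + beta * R_market + error (OLS fit).
      beta, alpha = np.polyfit(estimation_market, estimation_stock, deg=1)

      expected = alpha + beta * event_market
      abnormal = event_stock - expected   # abnormal return AR_t
      car = abnormal.sum()                # cumulative abnormal return (CAR)

      print("alpha=%.4f beta=%.3f CAR=%.4f" % (alpha, beta, car))
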
    The contents of the special session will be as follows:
    - Event Study Framework;
    - Defining an event;
    - Normal and Abnormal Returns using different Return Models;
    - Statistical Testing of Abnormal Returns;
    - Use of Event studies in Financial Research;
    Dr. Chhavi Mehta holds a Ph.D. from IIT Delhi and a PGDM from T. A. Pai Management Institute, Manipal. She has more than twenty years of diverse experience in teaching, training, research and consultancy. She is working with International Management Institute, Delhi, one of the top-ranking private B-schools in India. She teaches Financial Management, Financial Accounting and Analysis, Cost and Management Accounting, Business Valuation, Management of Financial Services, and Financial Markets. Her research papers have been published in various international and national refereed journals. She reviews research papers for various journals. She regularly conducts Executive Development Programs in the finance area, especially on Financial Statements Analysis and Finance for Non-finance Executives. Currently, she has a consulting assignment with the Insurance Regulatory Authority of India. She can be contacted at chhavi.mehta@imi.edu, mchhavi@gmail.com.
    Dr. Reena Nayyar holds a Ph.D. from Guru Nanak Dev University, Amritsar, Punjab. She has more than six years of experience in teaching, training and research. She is working with International Management Institute, Delhi. She has worked with the Indian Institute of Management, Kozhikode and the Indian Institute of Management, Rohtak. She teaches Financial Accounting, Management Accounting and Financial Management. Her principal area of research is Mergers and Acquisitions. She has published various research papers in international and national journals of repute. She has reviewed research papers for many international journals. She can be contacted at kohlireena@gmail.com and reena.nayyar@imi.edu.

  • Special Session 09: NeuroManagement - NeuroMarketing

    Felisa M. Córdova, University Finis Terrae, Santiago, Chile. (fcordova@uft.cl)
    Rogers Atero, University Finis Terrae, Santiago, Chile. (rogers.atero@uft.cl)
    Hernán Díaz, University of Santiago de Chile, Santiago, Chile. (hernan.diaz@usach.cl)
    Fredi Palominos, University of Santiago de Chile, Santiago, Chile. (fredi.palominos@usach.cl)

    Human decision-making systems can depend on many factors; some of them are deeply rooted in ancestral phylogeny, while others are the result of our present life history and, as such, depend on our trained or dynamically changing preferences.
    Whenever efforts are made to engineer neurocognitive processes, it is assumed that there is prior knowledge and experience about the physiological and neurological elements and/or components of the systems we aim to highlight. Until now, many of these components have been revealed thanks to new technology involving brain stimulation and scanning, functional brain imaging, and image analysis.
    One of the spin-off consequences of the development of neuroeconomics, the neurobiology of decision making, was neuromarketing: the use of electrophysiological devices to capture human physiological activity during buying decisions in order to learn about preferences, probabilities of choice and the neural processes involved. It has been established that a few seconds before a risky decision, specific nuclei of the brain start evaluating current conditions until they surpass a threshold, from which point it is possible to predict the subsequent output.
    We present here a joint research venture that associates neurocognitive research on human behavior with empirical neuromarketing findings on decision making. The first objective of this enterprise is to develop novel and diverse ways to analyze, visualize and interpret human physiological data, with the purpose of characterizing functional processes of the brain at different timescales while performing different tasks.
    While the framework of neuromarketing has hitherto been the stimulus-response paradigm, the neurocognitive engineering approach searches for answers on the mid- and long-term timescale of behavioral change. This means that it is deeply interested in processes like teaching and learning as central processes and procedures of human communication, education and culture.

  • Special Session 10: Social Media Analytics for Business Applications

    Shikha Mehta, Jaypee Institute of Information Technology, Noida. (shikha.mehta@jiit.ac.in)
    Parmeet Kaur, Jaypee Institute of Information Technology, Noida. (parmeet.kaur@jiit.ac.in)
    Anuja Arora, Jaypee Institute of Information Technology, Noida. (anuja.arora@jiit.ac.in)

    The broad ambit of social media has attracted businesses to gain valuable insights into products and brands through social-media-based business intelligence. Social media has changed the environment and redefined business marketing strategies. Nowadays, all business houses have their own brand pages on social media sites such as Facebook and Twitter to reach out to a large audience. Social media has become a vibrant and lucrative medium to understand, analyze and mine varying business aspects corresponding to product, company and consumer, and to provide valuable business insights. In the last few years, the majority of companies have been utilizing this social media content to compete, make decisions, enhance product features, and analyze consumer behavior. There are several major issues, including competitive product analysis, deciding brand promotion strategies, product reputation and recommendation, and high-volume social data handling, involved in redesigning conventional business strategies to provide beneficial outcomes for business growth.
    This special session is seeking conceptual, experiential, scientific and technical papers offering new insights into the following topics, but is not limited to them:
    - Business Analytics and Predictive model for social media data;
    - Product and Brand Reputation Analysis for online Ecommerce applications;
    - Social Media marketing;
    - Opinion Mining;
    - Business Ego network;
    - Community detection;
    - Social Influence analysis;
    - Social media based Business Competitive Intelligence;
    - Social Commerce;
    - Social Media Visualization;
    - Big social data analytics;
    - Case studies of utilizing social media to gain Business insights;
    - Social media Analytics tool to explore business insight;
    - Social media for Business decision making;
    - Social data analytics for CRM;
    - Data Security in Social Media;
    - Sustainable Competitive Analysis;
    - Brand management through social media;
    - Advertisement content and consumer engagement analysis on social media;
    - Social Clouds and Analytics;
    Dr. Shikha Mehta received her Ph.D. in Computer Science from the University of Delhi in 2013. She has 15 years of academic experience. She is currently working at the Jaypee Institute of Information Technology, Noida, India. Her research interests include nature-inspired algorithms, soft computing, information retrieval, and large-scale global optimization. She has published several research papers in reputed international journals and conferences related to the above-mentioned research areas. She has successfully organized special sessions and workshops. She can be contacted at shikha.mehta@jiit.ac.in
    Dr. Parmeet Kaur received her Ph.D. (Computer Engineering) from NIT Kurukshetra, M.Tech. in Computer Science from Kurukshetra University, and B.E. (Hons) in Computer Science and Engineering from P.E.C., Chandigarh. She is currently working at the Jaypee Institute of Information Technology, Noida, and has 15 years of academic experience. Her research interests include distributed computing, cloud computing, distributed databases, mobile computing and fault tolerance in distributed systems. She can be contacted at parmeet.kaur@jiit.ac.in
    Dr. Anuja Arora received her Ph.D. (Computer Engineering) from Banasthali University, Rajasthan. She has 14 years of academic experience and 1.5 years of industry experience. She is working at the Jaypee Institute of Information Technology, India. Her primary research interests include web mining techniques, social network analysis and mining, web testing, the Semantic Web, and data mining. She has published several research papers in reputed international journals and conferences related to the above-mentioned research areas. She has been actively researching the domain of social media and social network analysis and mining, with a focus on its contributions to various research domains, and has several years of research experience. She has successfully organized special sessions and workshops. She can be contacted at anuja.arora29@gmail.com, anuja.arora@jiit.ac.in.

  • Special Session 11: A scientific approach to market segmentation and validating the segmentation solution

    Neena Sondhi, International Management Institute, New Delhi. (neenasondhi@imi.edu)

    The most critical business decision in turbulent and fiercely competitive times is to be extremely focused in one's business and marketing strategy. This task begins with a clear definition of the firm's target segment, so that an appropriate and distinct positioning may be designed. Though market segmentation was structured and explained by McDonald and Dunbar (1995), it has evolved as a practice from unidimensional demographic and geographic segmentation to more complicated lifestyle-based and, more recently, benefit-based segmentation. These are more subjective practices, and hence converting them into a quantitative methodology requires a triangulation and mixed-method approach. Thus the criteria developed through qualitative processes need to be converted into tools that may be inclusive of the multiple benefits the consumer uses while making a buying decision. These need to be subjected to grouping or clustering through a mix of factor and cluster analysis techniques.
    However, firms would like to further validate the findings of the solution with additional cross-verification measures. Thus the solution is further validated through a series of both hierarchical and non-hierarchical cluster techniques and discriminant analysis. The solution then needs to be subjected to profiling methods so that the firm has a clear snapshot of the cluster profile in terms of demographics, media habits, as well as likelihood of purchase intention. The session will thus look at the quantitative methodology of grouping, validation and profiling that is used for arriving at a segmentation solution for a firm or a sector. The sessions will be illustrated with real-life segmentation solutions the author has worked on for different products and services, and with how these have also been used to build an academic knowledge base through research papers. Through ITQM the participants can further discuss the newer methodologies being used in segmentation, the approaches that can be used to eliminate the possibility of subjective bias, and the problems and challenges when one looks at diverse segments who may seek similar benefits but may have widely different brand preferences. Thus the focus of the session would include, but not be limited to, the following topics (a minimal clustering sketch follows this list):
    - The theory of market segmentation;
    - From qualitative to quantitative- Exploratory factor analysis of data;
    - Grouping and formulating market segments based on hierarchical and non-hierarchical methods;
    - Authenticating/validating the cluster solution using discriminant analysis;
    - Profiling the clusters in terms of demographics, media habits and purchase intentions;
    - Marketing/business decisions based on grouping techniques
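    A minimal sketch of the quantitative pipeline outlined above, using scikit-learn as an example toolkit: reduce benefit ratings with PCA (a stand-in for exploratory factor analysis), group respondents with k-means, and check the separation of the clusters with a linear discriminant model. The survey data are randomly generated for illustration only.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      ratings = rng.normal(size=(300, 8))   # 300 respondents, 8 benefit items

      factors = PCA(n_components=3).fit_transform(ratings)        # reduce
      clusters = KMeans(n_clusters=4, n_init=10,
                        random_state=0).fit_predict(factors)      # group

      lda = LinearDiscriminantAnalysis().fit(factors, clusters)   # validate
      print("cluster sizes:", np.bincount(clusters))
      print("discriminant re-classification accuracy:", lda.score(factors, clusters))
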
    Neena Sondhi: Merit holder and Doctorate in Consumer Psychology from the University of Delhi. She has been trained in the case method of teaching and writing at the Harvard Business School, and in monitoring and evaluating the effectiveness of social programs by the ISB (Indian School of Business) and UNICEF. She has been in academics for the past two decades and her core areas of expertise are consumer behavior, consumer psychology, marketing research and core marketing.
    Dr Sondhi has conducted numerous social and organizational research assignments; her clients have included government and non-governmental organizations such as IDRC Canada, the Planning Commission, and the Navdanya Foundation. She also has to her credit market research, market potential studies and consultancy assignments for reputed Indian and international organizations. An avid researcher, she has to her credit a number of research papers and case studies published in national as well as international journals and publishing houses. Recently her paper on organic consumption was judged the best research paper of 2015 by Emerald Publishing.
    She is also a prolific writer and writes for numerous Indian daily newspapers. Her co-authored book titled "Research Methodology: Concepts and Cases" has been widely adopted in prestigious business schools across the country and has been rated as one of the top 25 books in management education. Besides academics and research, she also undertakes faculty development programs in business research and training programs (both open and in-company) in Marketing Research, Marketing Communication and Negotiation Skills, Consumer Behavior and Customer Care. Dr Sondhi was awarded the "best teacher in marketing management" award in 2013 by the Dainik Bhaskar and Dewang Mehta Foundation. Neena can be reached at neenasondhi@imi.edu.

  • Special Session 12: Cloud, Big Data and Analytics for a Successful Organization

    Nitin Upadhyay, Goa Institute of Management, India. (upadhyay.nitin@gmail.com or nitin@gim.ac.in)

    Cloud, Big Data and Analytics contribute significantly to the success of an organization in a modern, client/audience-driven marketplace, and are perceived as a very interesting research area from theoretical and practical perspectives. The conference session entitled "Cloud, Big Data and Analytics for a Successful Organization" is expected to exchange ideas and thoughts about the impact of Cloud, Big Data and Analytics research on the state of the art, as well as upcoming trends in research and applications of these solutions for an organization that successfully faces modern market, organizational and societal challenges in a creative, innovative way. It provides a platform for the participants to present and discuss the most recent, innovative and significant findings and experiences in the field of Cloud, Big Data and Analytics research and practice.
    Topics of the session include, but are not limited to, the following:
    - Data Driven Decision Making;
    - Competition and Intelligence, Competing on Analytics;
    - Data Driven Marketing and Decision Making;
    - Creativity and Innovativeness based on Big Data;
    - Managing Analytical People;
    - Building an Analytical Capability;
    - Cloud and Big Data Applications (Marketing, Logistics, Finance, Banking, Insurance, HR, Government, People, Culture, Communication, Leadership, Performance);
    - Temporal Big Data;
    - Cloud-Based Business Intelligence;
    - Models, methods and tools for Big Data and Analytics;
    - Data mining, Text mining, Opinion Mining;
    - Cloud and Big Data Systems' Architectures;
    - Cloud Service Management and Decision Making;
    - Algorithms for Big Data Analysis/Processing;
    - Big Data Visualization.

  • Special Session 13: Option Trading & Strategies

    Rahul Sharma, Jaypee Business School, India. (rahul.sharma@jiit.ac.in)

    Risk is a distinguishing feature of all commodity and capital markets. Prices of all commodities are subject to variation over time in keeping with prevailing demand and supply conditions. Similarly, prices of different currencies, shares, debentures and other securities are also subject to continuous change. Those who deal in any one or more of these commodities, securities or currencies are continually exposed to risk. Derivatives thus came into being primarily out of the need to eliminate price risk. Derivatives can be classified into forward, futures, options and swap agreements.
    The Indian securities market witnessed the commencement of trading in equity derivatives in June 2000. In July 2001, trading in options on individual securities commenced. In India, options were traditionally traded on the OTC market under the names teji, mandi, teji-mandi, call, put, etc. Knowing all this, a natural question is how a trader should use different types of options. The answer is that the choices a trader makes depend on the trader's judgement about how prices will move. This special session on Options Trading & Strategies will provide the audience with not only an idea of the basic concepts but also their practical application.
    This session aims to enhance the competitiveness of participants in the financial industry. Participants learn from both the rich practical experience of the faculty, as well as from the diverse experience of fellow learners. It will provide an ideal platform for gaining new insights in order to be successful.
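    As a small illustration of the payoff arithmetic behind the strategies listed below, here is a minimal sketch of the long call, long put and long straddle payoffs at expiry; the strike and premiums are illustrative assumptions.

      def long_call_payoff(spot: float, strike: float, premium: float) -> float:
          return max(spot - strike, 0.0) - premium

      def long_put_payoff(spot: float, strike: float, premium: float) -> float:
          return max(strike - spot, 0.0) - premium

      def long_straddle_payoff(spot: float, strike: float,
                               call_premium: float, put_premium: float) -> float:
          """Call plus put at the same strike: profits from a large move either way."""
          return (long_call_payoff(spot, strike, call_premium)
                  + long_put_payoff(spot, strike, put_premium))

      strike, call_premium, put_premium = 100.0, 4.0, 3.0
      for spot in (85.0, 100.0, 115.0):
          payoff = long_straddle_payoff(spot, strike, call_premium, put_premium)
          print(spot, round(payoff, 2))   # loses the combined premium near the strike
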
    The contents of this special session will be as follows:
    - Understanding Call & Put Option;
    - Risk - Return of Option Buyer & Seller;
    - Payoff Matrix;
    - Graphical Analysis;
    - Bullish & Bearish Strategies;
    - Strip & Strap Strategies;
    - Straddle & Strangle Strategies;
    - Calendar Spread Strategies;
    - Butterfly & Condor Strategies;
    - Box Strategy / 4 Leg Strategies;
    - Appropriate time to Enter and Exit Market;
    - Suitable Market Situation for Execution;
    Dr. Rahul Sharma has more than 14 years of experience in teaching management programmes at institutions of repute such as Jaypee Business School, Noida, the Institute of Chartered Financial Analysts of India, Hyderabad, and St. John's College, Agra. He is actively involved in training executives in different Management Development Programmes, including in-company training programmes such as for JIL Information Technology Limited. He has a number of publications to his credit in reputed peer-reviewed national and international journals and books, along with papers presented at national and international conferences. He also writes regular columns for local newspapers on topics like financial planning and the Budget, and sometimes on politics.

  • Special Session 14: Re-skill or perish: The changing paradigms in Indian technology workforce

    Shekhar Sanyal, The Institution of Engineering and Technology, India. (ssanyal@theiet.in)

    No matter how much we dislike it, automation and AI are here to stay, and they are rendering a significant percentage of the current workflow and workforce redundant. The software industry in India, which is estimated to directly employ nearly 4 million people, is under severe pressure to automate routine jobs such as testing and bug fixing, and cloud and digital implementation. If reports are to be believed, 50% of the 3.9-million-strong workforce will become irrelevant in the next 3-4 years, and the ones affected will be in mid-to-low-level roles. The biggest concern plaguing Indian tech CEOs has been the lack of skills demonstrated by their employees to think, engage and deliver to customers. Vishal Sikka, CEO of Indian ITES major Infosys, and Srinivas Kandula, CEO of Capgemini, have been among the first to speak about these skill challenges of middle-level roles, where 65% are considered un-trainable for the skills in demand. With job cuts looming large and a million engineers graduating every year, we are looking at an unemployment rate of grave proportions if we do not act quickly. This special session will include discussions around what educational institutions should focus on in the short-to-medium term to address skill-building at their end, the skills in demand and how to go about gaining them, as well as avenues for industry-academia collaboration.
    Shekhar Sanyal is Country Head and Director of The Institution of Engineering and Technology, India (www.theiet.org).

  • Special Session 15: Research on Commodity derivatives markets: An Indian Perspective

    Shriram Anil Purankar, Jaypee Business School, India. (shriram.purankar@jiit.ac.in)

    The global financial crisis of 2008, and the dotcom bust before it in the previous decade, taught investors and traders the most basic principle of investment: not to bank on the stock market alone but to diversify their investments and portfolios. Typically, alternative investment options other than stocks or bonds include commodities, real estate, private equity, and hedge funds. After the dotcom crash, 'commodity futures' increasingly emerged as an alternative investment option and an appropriate asset class for investors in the long term. Therefore, similar to understanding the equity stocks of the capital market, understanding the financial characteristics of commodities is essential. These financial characteristics, mainly the stylized behavior of commodity prices, cross-commodity relationships, long-term co-movement among commodity prices, their co-movement with traditional asset classes (like stocks, indices, bonds and exchange rates), and their dependence on various phases of worldwide business cycles, are highly crucial for all classes of market participants. The session will cover the following topics:
    - Background of Indian commodity markets;
    - Themes of research on commodity markets using daily data;
    - Models used in research on commodity markets and their applicability (a minimal sketch follows this list):
      o Cointegration;
      o Granger causality;
      o Vector Error Correction Model;
    - Applications for traders and speculators who invest in the commodity derivatives market in India;
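    The following is a minimal sketch of the three models named in the list above, run on simulated spot and futures price series with statsmodels; the data, lag order and cointegration rank are illustrative assumptions, not findings of the session.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import coint, grangercausalitytests
      from statsmodels.tsa.vector_ar.vecm import VECM

      rng = np.random.default_rng(1)
      common = np.cumsum(rng.normal(size=500))            # shared stochastic trend
      prices = pd.DataFrame({
          "spot":    common + rng.normal(scale=0.5, size=500),
          "futures": common + rng.normal(scale=0.5, size=500),
      })

      # 1. Engle-Granger cointegration test between spot and futures prices.
      t_stat, p_value, _ = coint(prices["spot"], prices["futures"])
      print("cointegration p-value:", round(p_value, 4))

      # 2. Granger causality: do futures returns help predict spot returns?
      grangercausalitytests(prices[["spot", "futures"]].diff().dropna(), maxlag=2)

      # 3. Vector Error Correction Model with one cointegrating relation.
      vecm_result = VECM(prices, k_ar_diff=1, coint_rank=1).fit()
      print(vecm_result.alpha)   # adjustment (error-correction) coefficients
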
    Mr. Shriram Anil Purankar holds a Master's degree in Business Administration (MBA) from Symbiosis International University. He did his Bachelor's degree in Electrical Engineering at Purdue University, Indiana, USA.
    He has a total of 5 years of experience across industries such as financial services and commodity trading, as well as academia. He has worked with companies like Fannie Mae, Freddie Mac and SBI Life Insurance, and was working with Glencore prior to joining Jaypee Business School as a faculty member.
    His interests lie in the area of Import-Export Management, Economics, International Business and Commodity Trading. He is currently pursuing his PhD in Management from Jaypee Institute of Information Technology, Noida.

  • Special Session 16: The third special session on Data Acquisition and Management for Traceability Analytics (IDAMTA)

    Jing He, Victoria University, Australia. (jing.he@vu.edu.au)
    Bo Mao, Nanjing University of Finance and Economics, China. (bo.mao@njue.edu.cn)
    Hai Liu, School of Computer, South China Normal University, China. (liuhai@scnu.edu.cn)

    1. Overview
    In the era of wireless technology, robotics and web services, many computing technologies are being introduced. With the recent development and progress of the IoT (Internet of Things), it is possible to get information about how a system is operating and its real-time status in detail. For example, RFID can track the distribution of goods, different sensors can monitor the environment, and GPS can send back location and time. Based on this information, we can build a log for the monitored system and implement traceability analysis. Traceability is the ability to verify the history, location, or application of an item. It is especially critical for industries such as food processing, logistics, supply chain and e-business. The two key technologies for traceability analysis are data acquisition and data management. In the age of cloud computing, they are two promising fields. Although there are several solutions already in place, many challenges remain to be investigated and tackled.
    The purpose of this special session is not only to discuss the existing topics in data acquisition and management for traceability analysis, but also to focus on the new, rapidly growing area arising from the integration of big data analytics and traceability analysis for significant mutual promotion. We intend to discuss recent and significant developments in the general area and to promote cross-fertilization of techniques. The participants in this special session will benefit as they will learn the latest research results on data acquisition and management for the IoT and on big-data-analytics-based traceability systems, as well as novel ideas for merging them.
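    As a purely hypothetical illustration of the kind of data this session is concerned with, the sketch below collects IoT observations (RFID reads, locations, sensor values) into a per-item log and replays the trace of one item; all field names and records are assumptions.

      from dataclasses import dataclass
      from datetime import datetime
      from typing import List

      @dataclass
      class TraceEvent:
          item_id: str          # e.g. an RFID/EPC tag value
          timestamp: datetime
          location: str         # e.g. a facility code or GPS fix
          temperature_c: float  # e.g. a cold-chain sensor reading

      log: List[TraceEvent] = [
          TraceEvent("EPC-001", datetime(2017, 5, 1, 8, 0), "farm_A", 4.1),
          TraceEvent("EPC-001", datetime(2017, 5, 1, 14, 0), "truck_12", 6.8),
          TraceEvent("EPC-001", datetime(2017, 5, 2, 9, 0), "warehouse_3", 3.9),
      ]

      def trace(item_id: str, events: List[TraceEvent]) -> List[TraceEvent]:
          """Replay the full history of one item, ordered by time."""
          return sorted((e for e in events if e.item_id == item_id),
                        key=lambda e: e.timestamp)

      for event in trace("EPC-001", log):
          print(event.timestamp, event.location, event.temperature_c)
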
    2. History of this workshop
    We successfully organized one workshop at the 2nd ITQM conference in Moscow, where seven authors presented their work at the Higher School of Economics. In 2015, the special session on traceability analysis was held at the 3rd ITQM conference in Brazil, where 5 papers were presented. In 2016, we continued the special session in Korea; 6 papers were accepted and four authors presented their work.
    3. Goal
    The special session is interdisciplinary and provides a platform for researchers, industry practitioners and students from engineering, sociology, computer science and information systems to share, exchange, learn, and develop new research results, concepts, ideas, principles, and methodologies, aiming to bridge the gaps between paradigms, encourage interdisciplinary collaborations, and advance and deepen our understanding of the IoT, big data analytics, traceability and the related data management methods.
    There are two major topics of interest for this workshop: (1) traceability data acquisition, and (2) data management and mining for the generated IoT data. Comprehensive tutorials and surveys are also expected. The general topics include, but are not limited to:
    - Traceability Data Management;
      o Visualization of IoT based Traceability systems;
      o Intelligent Data Fusion and Aggregation;
      o Storage Management Technologies;
      o Deep Learning;
      o Big (Sensor) Data;
      o Pattern Discovery;
      o Multiple Representation Structure;
      o Spatiotemporal Data Management and Analysis;
    - IoT based Traceability Data Acquisition;
      o RFID Related Technologies;
      o Wireless Sensor Networks;
      o Online Quality Estimation;
      o Data Acquisition based on Smart Phones;
      o User Analysis based on Social Networks;
    More specifically, recommended topics include, but are not limited to, the following:
    - Advanced Cloud Computing and IT Solutions for Traceability Systems;
    - Agent-based approaches to Cloud and ICT Services for Traceability Systems;
    - Self-Organizing Agents for Service Composition and Orchestration in Traceability Systems;
    - Self-service cloud and self-optimization in Traceability Systems;
    - Cloud and information resource allocation approaches;
    - Privacy Preserving in Cloud Computing for Traceability Systems;
    - Trust in Cloud Computing for Traceability Systems;
    - Traceability Systems related Workflow Design and Optimization;
    - Emerging Areas of Traceability Applications in the frontier of web and cloud computing.
    4. Special issues
    Selected papers will be recommended to the International Journal of Information Technology & Decision Making (SCI) and the Journal of Computers (EI).
    5. Short Bio for co-chairs
    Dr. Jing He is currently a full Professor in the College of Engineering and Science, Victoria University. She has been awarded a PhD degree from Academy of Mathematics and System Science, Chinese Academy of Sciences in 2006. Prior to joining to Victoria University, she worked in University of Chinese Academy of Sciences, China during 2006-2008. She has been active in areas of Data Mining, Web service/Web search, Spatial and Temporal Database, Multiple Criteria Decision Making, Intelligent System, Scientific Workflow and some industry field such as E-Health, Petroleum Exploration and Development, Water recourse Management and e-Research. She has published over 40 research papers in refereed international journals and conference proceedings including ACM transaction on Internet Technology (TOIT), IEEE Transaction on Knowledge and Data Engineering (TKDE), Information System, The Computer Journal, Computers and Mathematics with Applications, Concurrency and Computation: Practice and Experience, International Journal of Information Technology & Decision Making, Applied Soft Computing, and Water Resource Management. She received research fund from ARC early career researcher award (DECRA), ARC discovery, ARC Linkage, National Science Foundation of China, Youth Science Fund of Chinese Academy of Sciences, Grant-in aid for Scientific Research of Japan. She served on three program committees of international conferences: International Conference on Computational Science (ICCS), The IEEE International Conference on Data Mining (ICDM), and International Symposium on Knowledge and Systems Science (KSS), as well as the workshop co-chair on APWeb 2008, WI 2009, MCDM 2009. In addition, she has been serving as external reviewers for several international journals and conferences, such as Management Science, The Computer Journal, IEEE Transaction on Systems, Man, Cybernetics, International Journal of Information Technology and Decision Making, Journal of Management Review (in Chinese), Decision Support System, Science (in China), ICDE, ICCS, ICDM, KSS, WISE, HIS, APWeb etc.
    Dr. Bo Mao is currently an Associate Professor Nanjing University of Finance and Economics, China. He has been awarded a PhD degree from Royal Institute of Technology-KTH, Sweden in 2012. He has been active in areas of 3D City model generalization, Online Visualization, Data Mining, Spatial and Temporal analysis, and some industry field such as Food trace-ability system and e-business. He has published over 30 research papers in refereed international journals and conference proceedings including ISPRS Journal of Photogrammetry and Remote Sensing (ISPRS J), Computers, Environment and Urban Systems (CEUS), Science China Earth Sciences, World Wide Web Journal (WWWJ), International Conference on Geographic Information Science (GIScience), ACM conference on Recommender systems (RecSys). He received research fund from National Science Foundation of China and Jiangsu Doctor Convergence Program. He served on program committees of International conference on Advanced Data Mining and Applications (ADMA). In addition, He has been serving as external reviewers for several international journals and conferences, such as ISPRS J, CEUS, IJGIS, ADMA etc.
    Dr. Hai Liu is currently a researcher at South China Normal University. His research interests include machine learning, data mining, ontology engineering (description logics), classification, clustering, matrix factorization, topic modeling, and recommender systems.

  • Special Session 17: Digital and Global Business Communication

    Neerja Pande, Indian Institute of Management Lucknow, India. (neerja@iiml.ac.in)

    Global businesses, whether manufacturing- or services-based, are increasingly dependent on collaborative work between virtual or globally distributed teams. These virtual teams operate as groups of individuals who work across time, space and organizational boundaries, with links strengthened by webs of communication technology. Advanced digital communication technologies have been shown to facilitate and enhance collaboration by bridging physical and temporal barriers among distributed knowledge workers.
    Such technologies are both synchronous (real-time) and asynchronous, and include Internet-based tele- and web-conferencing services (e.g., Skype), cloud-based file/work-sharing and collaboration platforms (e.g., Google Drive and Hightail), messaging and chat groups (e.g., WhatsApp and BBM), and a host of social networking sites (e.g., Twitter and LinkedIn). Evolving beyond email and instant messaging, the rapid growth of a diverse array of digital communication technologies has significant and long-term implications for how businesses and teams function.
    Topics of interest can include, but are not limited to the following:
    - Business Communication in a Digital, Social, Mobile World: The Changing Paradigms;
    - Virtual Team Communication: Opportunities and Challenges;
    - Breach of Privacy and Trust: Ethical & Legal Challenges in Global Business Communication;
    - Cross Cultural Competency in a Diverse Global World: Issues and Challenges;
    - Changing Strategies for Business Communication on Social Networks;
    - New Language and Vocabulary of a Hyper-Mediated Global World;
    - From High Touch to High Tech: Accommodation Strategies for Business Communication;

  • Special Session 18: High Performance Data Analysis

    Vassil Alexandrov, ICREA Research Professor in Computational Science at Barcelona Supercomputing Centre, Spain. (vassil.alexandrov@bsc.es)
    Ying Liu, University of Chinese Academy of Sciences, China. (yingliu@ucas.ac.cn)

    Big data has been an emerging and active research topic in recent years. There is a clear need to analyze huge amounts of unstructured and structured complex data, both historical data and data coming from real-time feeds (e.g., business data, meteorological data from sensors, remote sensing data, etc.). This is beyond the capability of traditional data processing techniques and tools. The challenges include data capture, storage, search, sharing, transfer, analysis, and visualization. In order to meet the requirements of big data analysis, computational science and high performance computing methods and algorithms are in real demand to address these challenges, including scalable mathematical methods and algorithms, parallel and distributed computing, cloud computing, etc. This special session will focus on the issues of high performance data analysis. Theoretical advances, mathematical methods, algorithms and systems, as well as diverse application areas, will be in its focus.
    This year the session aims to explore emerging trends in high performance data analysis. We welcome papers on all aspects of high performance data analysis, including, but not limited to:
    - Data processing exploiting hybrid architectures and accelerators (multi/many-core, CUDA-enabled GPUs, FPGAs);
    - Data processing exploiting dedicated HPC machines and clusters;
    - Data processing exploiting the cloud;
    - Deep learning;
    - High performance data-stream mining and management;
    - Efficient, scalable, parallel/distributed data mining methods and algorithms for diverse applications;
    - Advanced methods and algorithms for big data visualization;
    - Parallel and distributed KDD frameworks and systems;
    - Theoretical foundations and mathematical methods for mining data streams in parallel/distributed environments;
    - Applications of parallel and distributed data mining in diverse application areas such as business, science, engineering, medicine, and other disciplines;
    Program Committee
    * Haihua Shen, University of Chinese Academy of Sciences, China;
    * Steve Chiu, Idaho State University;
    * Jayaprakash Pisharath, Intel Corporation;
    * Yang Gao, Baidu.com;
    * Jun Xu, Institute of Computing Technology, Chinese Academy of Sciences;
    * Ying Liu, University of Chinese Academy of Sciences, China;
    * Vassil Alexandrov, ICREA-BSC, Spain;
    * Svetlana Chuprina, Perm University, Russia;
    Vassil Alexandrov has been an ICREA Research Professor in Computational Science at BSC (Barcelona Supercomputing Center) since September 2010. He holds an MSc in Applied Mathematics from Moscow State University, Russia (1984) and a PhD in Parallel Computing from the Bulgarian Academy of Sciences (1995). He is a member of the Editorial Board of the Journal of Computational Science and a Guest Editor of Mathematics and Computers in Simulation. He has published over 110 papers in renowned refereed journals and international conferences and workshops in the area of his research expertise. His research interests are in the area of computational science, encompassing parallel and high performance computing, scalable algorithms for advanced computer architectures, and Monte Carlo methods and algorithms.
    Ying Liu received her B.S. degree from Peking University, China, in 1999, and her M.S. and Ph.D. degrees in computer engineering from Northwestern University, Evanston, IL, USA, in 2001 and 2005, respectively. She is currently a professor in the School of Computer and Control, University of Chinese Academy of Sciences. She also holds an adjunct appointment with the Key Lab of Big Data Mining and Knowledge Management of the Chinese Academy of Sciences. Her research interests include data mining, high-performance computing, deep learning, etc. She is a member of the Editorial Board of the Data Science Journal and a Guest Editor of Annals of Data Science. She has published 60 papers in renowned refereed journals and international conferences. She served as chair of the workshop on High Performance Data Analysis at ITQM 2014, 2015 and 2016, and as chair of the workshops on High Performance Data Mining held with the 7th International Conference on Data Mining (ICDM), 2007, and with the 7th International Conference on Computational Science (ICCS), 2007.

  • Special Session 19: Consensus and collaborative decision-making

    Enrique Herrera-Viedma, Granada University, Spain. (viedma@decsai.ugr.es)
    Gang Kou, Southwestern University of Finance and Economics, Chengdu, China. (kougang@swufe.edu.cn)
    Florin Filip, The Romanian Academy, Romania. (ffilip@acad.ro)
    Francisco Javier Cabrerizo, Granada University, Spain. (cabrerizo@decsai.ugr.es)
    Ignacio Javier Pérez, Cadiz University, Spain. (ignaciojavier.perez@uca.es)

    Consensus is an important area of research in group decision making and multi-agent decision making. Consensus is defined as a state of mutual agreement among members of a group in which all opinions have been heard and addressed to the satisfaction of the group. A consensus reaching process is a dynamic and iterative process composed of several rounds in which the experts/agents express, discuss and modify their preferences. A particular case of group decision making is collaborative decision making, i.e., a decision making context in which many individuals develop a collaborative behaviour in order to reach a solution. This kind of collaborative decision making often appears in social media and in the presence of large amounts of data (big data contexts).
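    As a purely illustrative aid, the following minimal sketch shows one way such an iterative consensus reaching loop can be expressed in code; the numerical preference vectors, the similarity-based consensus measure, the 0.9 threshold and the feedback rule that moves each expert toward the group mean are all assumptions chosen for the example, not a model prescribed by the session organizers.

        import numpy as np

        def consensus_degree(prefs):
            # Mean pairwise similarity (1 - mean absolute difference) across experts.
            m = len(prefs)
            sims = [1 - np.mean(np.abs(prefs[i] - prefs[j]))
                    for i in range(m) for j in range(i + 1, m)]
            return float(np.mean(sims))

        def feedback_round(prefs, rate=0.3):
            # Feedback step: each expert moves a fraction `rate` toward the group mean.
            group = np.mean(prefs, axis=0)
            return [p + rate * (group - p) for p in prefs]

        # Three experts assessing three alternatives on a [0, 1] scale (toy data).
        prefs = [np.array([0.9, 0.2, 0.6]),
                 np.array([0.4, 0.3, 0.8]),
                 np.array([0.7, 0.6, 0.1])]
        threshold, max_rounds = 0.9, 10
        for r in range(max_rounds):
            cd = consensus_degree(prefs)
            print(f"round {r}: consensus degree = {cd:.3f}")
            if cd >= threshold:
                break                      # sufficient agreement reached
            prefs = feedback_round(prefs)  # otherwise, discuss and revise preferences

    In real consensus models the feedback step would be driven by the experts themselves (or by a moderator's recommendations) rather than by an automatic move toward the group mean.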
    Both consensus building (CB) and collaborative decision-making (CDM) can be carried out in face-to-face meetings or in computer-supported settings. Modern information and communication technologies (I&CT), in particular business intelligence and analytics (BI&A), social networks, and mobile cloud computing, enable effective e-collaboration activities, and a second generation of such activities can be noticed worldwide.
    The objective of the proposed session is to highlight ongoing research on new methodologies for CB processes and CDM as applied to various societal and economic fields. Papers on second-generation e-collaboration activities and systems are also expected. Focusing on theoretical issues and applications in various domains, ideas on how to support CB processes and CDM, both in research and development and in industrial applications, are welcome. Papers describing advanced prototypes, systems, tools and techniques, and general survey papers indicating future directions, are also encouraged. Topics appropriate for this special session include, but are not limited to:
    - Preference modelling in collaborative decision making;
    - Collaborative decision making system applications;
    - Consensus in multi-agent decision making;
    - Collaborative decision making system for big data;
    - Collaborative decision making in Web 2.0 frameworks;
    - Intelligent collaborative decision making systems;
    - Collaborative decision making in presence of incomplete information;
    - Collaborative decision making in dynamic contexts;

  • Special Session 20: Best paper review session for Information Technology and Quantitative Management 2016

    Sungbum Park, Graduate School of Management of Technology, Hoseo University, Asan, Republic of Korea. (parksb@hoseo.edu)

    This paper session consists of the best papers hand-picked by the Conference Committee and the Program Chair based upon rigorous criteria reflecting the interests and domains of ITQM 2016. Although the selected papers were cross-reviewed to preserve the integrity of the ITQM submission and review process, the session is held so that these papers can benefit from further development.
    In the session, the authors receive feedback for further development of their papers and exchange research experiences with other participants who share similar research interests. Each participant is assigned the discussant role to provide constructive feedback to the authors.
    The main themes of this special session include:
    - Promising ICT Transfer Fields for Promotion of Micro-Startups;
    - Exploring Potential Users of Patents for Technology Transfer: Utilizing Patent Citation Data;
    - Multivariate skew normal copula for non-exchangeable dependence;
    - A Study of the Connected Smart Worker's Techno-Stress;
    - Analysis on the Ramp Reset Operation by Measuring the Surface Charge Distribution in an ac PDP cell;
    - Analysis of Carbon Dioxide and Cloud Effects on Temperature in Northeast China;
    - Similarity-based Change Detection for RDF in MapReduce;
    - The Effect of Organizational Structure on Open Innovation: A Quadratic Equation;
    - Assessing the Impact of Open and Closed Knowledge Sourcing Approach on Innovation in Small and Medium Enterprises.

  • Special Session 21: SMEs in the digital era: Innovating their manufacturing and business processes

    Min Ho Ryu, Graduate School of Management of Technology, Hoseo University, Asan, Republic of Korea. (ryumh12@hoseo.edu)

    Industry has changed significantly in the last few years as new technologies such as AI (artificial intelligence), machine learning, IoT (Internet of Things), and big data have come into wide use. To remain competitive, it is essential for firms to adopt new methodologies of innovation, manufacturing, procurement, and business process. However, SMEs are in an especially difficult position for innovation and internationalization, as their resources are more limited than those of larger enterprises, and they may therefore need greater support in their entrepreneurial activities. This special session looks for practical and empirical approaches that can assist practitioners, academics, and researchers in understanding how SMEs can continue innovating and transforming their manufacturing and business processes to adjust to the forthcoming intelligence-based paradigm.
    Appropriate themes might thus include (but are not restricted to):
    - Role of technology management in SMEs;
    - Organization management of SMEs;
    - Competitive Strategies for SMEs;
    - Innovation process and performance of SMEs;
    - Knowledge transfer and sharing in SMEs;
    - Technology and its role (impact on performance, governance, etc.);
    - Critical success factors of SMEs;
    - Smart manufacturing of SMEs;
    - Adoption of technology (e-learning systems, ERP, etc.) & success factors;
    - The role of new technology on success of SMEs (business models, acculturation, social media);



    Workshops:

  • Workshop 01: 3rd Workshop on Scientific data analysis and decision making

    Dengsheng Wu, Institute of Policy and Management, Chinese Academy of Sciences, China. (wds@casipm.ac.cn)
    Yuanping Chen, Computer Network Information Center, Chinese Academy of Sciences, China. ( ypchen@cashq.ac.cn)

    As e-Science has emerged as a persistent and increasingly large part of the research enterprise, scientists are exploring new roles, services, staffing, and resources to address the issues arising from this new mode of research. Scientists use computer modeling and simulation programs to test and produce new theories and experimental techniques, often generating and accumulating vast amounts of data. Ideally, that data could be shared with other scientists for re-use and re-analysis, ultimately speeding up the process of scientific discovery. The collection and utilization of scientific data are the two primary features that characterize e-Science. Scientific data are generated by different aspects and departments of the management activities of research institutions, and are managed in a decentralized way and stored separately, which makes the data difficult to share and manage. Furthermore, the global sharing of data has promoted interdisciplinary teamwork on complex problems and has enabled other researchers to use data for different purposes. The main objective of this workshop is to provide researchers and practitioners an opportunity to share the most recent advances in the area of data science and decision analysis for e-Science. The workshop aims to create a communication platform for researchers to share recent and significant developments in this general area.
    Topics of interest include, but are not limited to, the following:

    - Metadata standards for scientific data;
    - Scientific data quality analysis;
    - Scientific data integration and sharing;
    - ETL processes for scientific data;
    - Scientific data visualization;
    - Decision analysis modeling from scientific data;
    - Network analysis from scientific data;
    - Bibliometric analysis from scientific data;
    - Scientometrics from scientific data;

  • Workshop 02: The 10th International Workshop on Computational Methods in Energy Economics (CMEE2017)

    Lean Yu, School of Economics and Management, Beijing University of Chemical Technology, China. (yulean@amss.ac.cn)
    Ling Tang, School of Economics and Management, Beihang University, China. (tangling_00@126.com)
    Kaijian He, School of Business, Hunan University of Science and Technology, China. (kaijian.he@my.cityu.edu.hk)

    As is well known, energy economics is a subfield of economics that focuses on energy relationships as the foundation of all other relationships. The field draws on a number of disciplines, including economic theory, financial economics, computational economics, statistics, econometrics, operational research and strategic modeling. A wide interpretation of the subject includes, for example, issues related to forecasting, financing, pricing, investment, development, conservation, policy, regulation, security, risk management, insurance, portfolio theory, taxation, fiscal regimes, accounting and the environment. These issues involve a large number of computational problems to be solved for energy systems, particularly for energy risk measurement and management. This will be the tenth workshop on this subject, providing a premier and open forum for the dissemination of innovative computational methods as well as original research results in energy economics and energy risk management.
    In order to provide an academic exchange platform, the First International Workshop on Computational Methods in Energy Economics (CMEE 2007) was held in Beijing on May 27-30, 2007. Subsequently, the Second through Ninth International Workshops on Computational Methods in Energy Economics (CMEE 2008, CMEE 2009, CMEE 2010, CMEE 2011, CMEE 2012, CMEE 2013, CMEE 2015, and CMEE 2016) were held in Nanjing, Sanya, Huangshan, Kunming, Harbin, Rio de Janeiro (Brazil), and Asan (Korea) on June 27-30, 2008, April 24-26, 2009, May 28-31, 2010, April 15-19, 2011, June 24-26, 2012, May 16-18, 2013, July 21-24, 2015, and August 16-18, 2016, respectively. To promote the exchange and discussion of ideas in this field, the Tenth International Workshop on Computational Methods in Energy Economics (CMEE 2017) will be held in New Delhi, India, December 8-10, 2017. The organizers solicit all interested academic researchers and industrial practitioners to submit their recent research results to this workshop within the scope of the following topics.
    The workshop will provide an open forum for research papers concerned with computational problems in energy economics and energy risk management, including economic and econometric modeling, computation, and analysis issues in energy systems. The workshop will focus on, but is not limited to, the following topics:

    - Forecasting models for energy prices (oil, coal, gas, electricity);
    - Pricing models in energy markets (mean reversion, jump diffusion);
    - Investment analysis models in energy projects (portfolio theory);
    - Econometric modeling for energy demands;
    - Energy and environment policy modeling;
    - Modeling strategic behavior for energy security;
    - Hybrid energy-economy models for energy policy simulation;
    - Statistical analysis of energy cost, energy consumption and economic growth;
    - Energy risk management (risk measurement, hedging strategy and instruments);

  • Workshop 03: The 2nd Workshop on Outlier Detection in Financial Data Streams & Big Data and Management Science

    Aihua Li, Central University of Finance and Economics, China. (aihuali@cufe.edu.cn)
    Zhidong Liu, Central University of Finance and Economics, China. ( liu_phd@163.com)

    With the development of information technology, more and more data are being stored in many fields and industries. Big data refers not only to the data themselves but also to the technologies and ideas for dealing with them. Decision makers are still drowning in data while lacking knowledge. Management science is a discipline that addresses management problems with qualitative and quantitative methods, and new ideas and methods from big data have injected new energy into it. As a result, new methods and technologies have emerged in the field of big data and management science in recent years.
    Data streams are one of the important data types in the financial sector, and they have characteristics such as rapid arrival, instability and huge volume. Traditional analysis methods and theories cannot meet the requirements of financial data stream analysis due to these characteristics. This workshop focuses on how to detect abnormal patterns in data streams, especially financial data streams, and calls for new theoretical frameworks and methods of outlier detection. The topic includes outlier detection theory, methods and applications for financial data streams based on domain knowledge, outlier detection for data streams, and empirical analysis.
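    As a purely illustrative aid (not a method endorsed by the workshop), the minimal sketch below flags points in a univariate data stream whose z-score with respect to a sliding window exceeds a threshold; the window size, warm-up length and threshold are assumptions chosen for the example.

        from collections import deque
        import math

        class StreamOutlierDetector:
            def __init__(self, window=50, z_threshold=3.0):
                self.window = deque(maxlen=window)   # most recent observations
                self.z_threshold = z_threshold

            def update(self, value):
                """Return True if `value` looks abnormal relative to the recent window."""
                is_outlier = False
                if len(self.window) >= 10:           # simple warm-up period
                    mean = sum(self.window) / len(self.window)
                    var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
                    std = math.sqrt(var)
                    if std > 0 and abs(value - mean) / std > self.z_threshold:
                        is_outlier = True
                self.window.append(value)
                return is_outlier

        # Toy stream: a slowly drifting price series followed by one abrupt jump.
        detector = StreamOutlierDetector()
        stream = [100 + 0.1 * i for i in range(60)] + [250]
        flags = [detector.update(x) for x in stream]
        print(flags[-1])   # True: the jump is flagged as an outlier

    Real financial applications would of course combine such simple statistical screens with the domain-knowledge and model-based methods listed below.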
    The topics and areas include, but are not limited to:

    - Outlier detection based on classification method;
    - Outlier detection based on clustering method;
    - Outlier detection based on domain knowledge;
    - Data preprocessing method for data streams;
    - Domain knowledge and risk management in finance;
    - Data mining and knowledge discovery in finance;
    - Outlier detection methods in other fields;
    - Quantitative management and decision making in finance;
    - Method, model and application in big data and management science;

  • Workshop 04: The Fifth Workshop on Optimization-based Data Mining

    Yingjie Tian, Chinese Academy of Sciences Research Center on Fictitious Economy and Data Science, China. (tyj@ucas.ac.cn)
    Zhiquan Qi, Chinese Academy of Sciences Research Center on Fictitious Economy and Data Science, China. ( qizhiquan@ucas.ac.cn)
    Yong Shi, College of Information Science and Technology, University of Nebraska at Omaha, USA. ( yshi@unomaha.edu)

    The fields of data mining and mathematical programming are increasingly intertwined. Optimization problems lie at the heart of most data mining approaches. For the last several years, researchers have extensively applied quadratic programming to classification, best known through V. Vapnik's Support Vector Machine, as well as to various applications. However, the use of optimization techniques for data separation and data analysis goes back many years. According to O. L. Mangasarian, his group formulated linear programming as a large-margin classifier in the 1960s. In the 1970s, A. Charnes and W. W. Cooper initiated Data Envelopment Analysis, in which fractional programming is used to evaluate decision making units, i.e., economically representative data in a given training dataset. From the 1980s to the 1990s, F. Glover proposed a number of linear programming models to solve discriminant problems with a small sample size of data. Since 1998, the organizers and their colleagues have extended this research idea to classification via multiple criteria linear programming (MCLP) and multiple criteria quadratic programming (MCQP), which differ from statistics, decision tree induction, and neural networks. So far, more than 100 scholars around the world have been actively working on the use of optimization techniques to handle data mining and web intelligence problems. This workshop intends to promote research interest in the connection of optimization, data mining and web intelligence, as well as in real-life applications.
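    For readers unfamiliar with the approach, one frequently cited two-criteria form of the MCLP classification model is sketched below in LaTeX; this reconstruction follows the general MCLP literature and is not necessarily the exact model used by the organizers. Here A_i is the attribute vector of record i, x the vector of attribute weights, b the boundary value, alpha_i the overlap of a misclassified record with the wrong side of the boundary, and beta_i the distance of a correctly classified record from the boundary:

        \begin{aligned}
        \text{minimize }   & \textstyle\sum_i \alpha_i
            && \text{(total overlap of misclassified records)}\\
        \text{maximize }   & \textstyle\sum_i \beta_i
            && \text{(total distance of correct classifications from the boundary)}\\
        \text{subject to } & A_i x = b + \alpha_i - \beta_i, && A_i \in \text{Bad},\\
                           & A_i x = b - \alpha_i + \beta_i, && A_i \in \text{Good},\\
                           & \alpha_i \ge 0,\ \beta_i \ge 0. &&
        \end{aligned}

    A compromise or weighted-sum solution combines the two criteria into a single linear program, which is what makes the model tractable for large training sets.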

  • Workshop 05: Survey Design and Standardization Using SPSS & AMOS

    Rajnish Kumar Misra, Jaypee Business School, Noida, India. (rajnish.misra@jiit.ac.in) or ( rajnish_misra@yahoo.com)

    Conducting research requires considerable rigor with respect to research design, methodology, and analysis. Surveys are a necessary component of quantitative research. Survey/questionnaire development requires statements, in the form of questions or items, for collecting data. To make the data collected using such an instrument valid and reliable, standardization techniques using the SPSS and AMOS software packages are applied. The outcome of this analysis is a standardized and empirically accepted research tool.
    Workshop Objective: The objective of this workshop is to help participants understand the nuances of questionnaire development using exploratory and confirmatory factor analysis with the help of the SPSS and AMOS software.
    Learning Outcome:
    At the end of this workshop, a participant will be able to:
    1. Analyze data using SPSS and AMOS;
    2. Standardize a questionnaire.
    This workshop aims at helping researchers in various domains of management to sharpen the research tools they use for data collection. Researchers may discuss their instruments, along with collected data, as well as how to standardize them to attain meaningful quantitative research.
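    The workshop itself uses SPSS and AMOS; purely as an open-source illustration of the same ideas, the minimal sketch below runs an exploratory factor analysis with scikit-learn and computes Cronbach's alpha for a simulated five-item scale. The simulated data, the single-factor assumption and the use of scikit-learn are assumptions for the example, not part of the workshop.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Simulated survey: 200 respondents answer 5 items driven by one latent construct.
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(200, 1))
        items = latent @ np.ones((1, 5)) + 0.5 * rng.normal(size=(200, 5))

        # Exploratory step: inspect the loadings to see which items hang together.
        fa = FactorAnalysis(n_components=1).fit(items)
        print("factor loadings:", fa.components_.round(2))

        def cronbach_alpha(data):
            # Internal-consistency reliability of a set of items (rows = respondents).
            k = data.shape[1]
            item_vars = data.var(axis=0, ddof=1).sum()
            total_var = data.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

    A confirmatory step comparable to AMOS would typically be carried out with a structural equation modeling package; the loadings and reliability check above correspond only to the exploratory part of the standardization process.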


  • Copyright © ITQM 2017  All rights are reserved.