Funding

Self-funded

Project code

OS&P4921024

Department

School of Organisations, Systems, and People

Start dates

October, February and April

Application deadline

Applications accepted all year round

Applications are invited for a self-funded, 3-year full-time or 6-year part-time PhD project.

The PhD will be based in the School of Organisations, Systems and People and will be supervised by Professor Mark Xu, Dr Salem Chakhar and Dr Muhammad Shakir.

The work on this project will involve:

  • Interdisciplinary research spanning business environment dynamism, complex decision making and Artificial Intelligence
  • Critical evaluation of the state of the art in research and practice in the specified domains
  • Developing theoretical constructs that add to knowledge, as well as practical solutions/prototypes
  • Aiming to publish in top-quality journals/international conferences and to support relevant grant applications

Emerging technologies such as artificial intelligence (AI) and machine learning (ML) are driving business change through digital transformation. As a result, managers face increased uncertainty and dynamism in decision making, usually measured by the complexity of environmental variables and the rate at which these variables change. The challenges can be viewed from two perspectives. First, in managerial decision making, there is an increasing volume of triggers, speculative information, fake news, misinformation and disinformation (Petratos, 2021) that senior managers need to attend to. Second, in organisational decision making, it is well documented that routine tasks and decisions can be handled efficiently by automation and AI, as the variables involved tend to be frequent, repetitive and relatively structured, so rules and programs can be applied. Non-routine tasks and decisions in highly uncertain situations, however, pose great challenges for AI because a) the situation is unfamiliar, unique, infrequent and heterogeneous, so variables cannot be easily defined; b) relevant data and knowledge may not be readily available and accessible; c) algorithms may not incorporate all assumptions and models; and d) managers may not trust the answers given by AI for various reasons, including lack of transparency, incorrect decisions, high risk, ethical concerns and unclear responsibilities (Barredo Arrieta et al., 2020; Mikalef et al., 2022; Vaia et al., 2022). As a result, a growing body of research suggests that AI and algorithms should be used with caution, as they may lead to algorithmic/automation bias (Ferguson, 2017; Rai et al., 2019; Sen et al., 2020).

 

This research aims to advance how AI can be better developed and deployed for managerial decision making under uncertainty. The candidate should select one of the two topics listed below:

 

Topic 1: Develop a framework for responsible AI in management decision making, taking into account organisational and technical considerations including digital governance, risk management and ethics, then test and validate the framework using appropriate approaches.

 

Topic 2: Design and develop an AI decision tool to support management decisions in uncertain situations, embedding responsible AI elements and adopting emerging Open AI algorithms, then test and validate the tool in a real or simulated decision situation.

 

The two topics are closely related in focus and scope. However, the first topic will lead to the development of a conceptual framework for responsible AI, informed by emerging AI technologies, new organisational settings and governance protocols, while the second topic will focus on the design and validation of a responsible-AI-based decision tool for complex managerial decisions.

 

References 

 

Rai, A., Constantinides, P., & Sarker, S. (2019). Next generation digital platforms: Toward human-AI hybrids. MIS Quarterly, 43(1), iii-ix.

Ferguson, A. G. (2017). The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. NYU Press. https://doi.org/10.2307/j.ctt1pwtb27

Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A. (2022). Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems, 31(3), 257-268. https://doi.org/10.1080/0960085X.2022.2026621

Petratos, P. N. (2021). Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons, 64(6), 763-774. https://doi.org/10.1016/j.bushor.2021.07.012

Sen, S., Dasgupta, D., & Gupta, K. D. (2020). An empirical study on algorithmic bias. In IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), pp. 1189-1194. https://doi.org/10.1109/COMPSAC48688.2020.00-95

Vaia, G., Arkhipova, D. & DeLone, W. (2022). Digital governance mechanisms and principles that enable agile responses in dynamic competitive environments. European Journal of Information Systems, 31(6), 662-680. https://doi.org/10.1080/0960085X.2022.2078743


Fees and funding

Visit the research subject area page for fees and funding information for this project.

Funding availability: Self-funded PhD students only. 

PhD full-time and part-time courses are eligible for the UK Government Doctoral Loan (UK and EU students only – eligibility criteria apply).

Bench fees

Some PhD projects may include additional fees – known as bench fees – for equipment and other consumables, and these will be added to your standard tuition fee. Speak to the supervisory team during your interview about any additional fees you may have to pay. Please note, bench fees are not eligible for discounts and are non-refundable.

Entry Requirements

You'll need a good first degree from an internationally recognised university (minimum upper second class or equivalent, depending on your chosen course), or a Master’s degree in Information Systems, Operational Research, or Business/Management with extensive IT skills or experience. In exceptional cases, we may consider equivalent professional experience and/or qualifications. English language proficiency is required at a minimum of IELTS band 6.5, with no component score below 6.0.

For Topic 1, candidates with management and stakeholder knowledge and experience are preferred.

For Topic 2, candidates with experience in programming, modelling, or developing AI tools/apps are preferred.

 

We’d encourage you to contact Professor Mark Xu (mark.xu@port.ac.uk) to discuss your interest before you apply, quoting the project code.

When you are ready to apply, please follow the 'Apply now' link on the Organisation Studies and Human Resource Management PhD subject area page and select the link for the relevant intake. Make sure you submit a personal statement, proof of your degrees and grades, details of two referees, proof of your English language proficiency and an up-to-date CV. Our ‘How to Apply’ page offers further guidance on the PhD application process. 

Please also include a research proposal of 1,000 words outlining the main features of your proposed research design – including how it meets the stated objectives, the challenges this project may present, and how the work will build on or challenge existing research in the above field. 

When applying, please quote project code: OS&P4921024