JOURNAL OF RESEARCH IN NATIONAL DEVELOPMENT VOLUME 7 NO 1, JUNE, 2009

JUSTIFICATION AND THE CHOICE OF EXPERIMENTAL RESEARCH INSTRUMENTS IN PUBLIC ADMINISTRATION

 

S.O. Uhunmwuangho

Department of Political Science and Sociology

and

M. Osemeke

Department of Banking and Finance, Western Delta University, Oghara, Delta State, Nigeria

 

Abstract  

The thrust of this paper is that, given the nature of Nigerian government and politics, which is a logical carry-over from the pattern of politics in the third world, the difficulties of experimenting with human beings and of risking the waste of public funds should be considered in any fruitful Public Administration research. The consequences of failure in any experimental research that involves public administration and the use of public funds are considerable; the political risks to the elected masters would often be greater than they would readily authorize. This is especially true in the social sciences generally, but the argument extends quite easily to Public Administration studies.

 

Key words: justification, choice, experimental research, instruments.

 

 


Introduction

Experimental research is a method of data collection and hypothesis testing through controlled experimentation with people. In this method of data collection, the researcher manipulates and controls one or more independent variables and observes the variation in the dependent variable in response to that manipulation (Black, 1996).
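
The logic described above can be illustrated with a small simulation. The sketch below is purely illustrative and is not drawn from the paper: a hypothetical researcher randomly assigns subjects to receive or not receive a manipulated independent variable (here, an imagined new work procedure) and then compares the observed dependent variable (a performance score) across the two groups. All names, figures and the Python setting are assumptions made only for illustration.

```python
import random

random.seed(1)

# Hypothetical subject pool; random assignment stands in for "control"
# of the other influences on the dependent variable.
subjects = list(range(100))
random.shuffle(subjects)
treatment_group, control_group = subjects[:50], subjects[50:]

def observe_performance(treated):
    """Hypothetical measurement of the dependent variable (a performance score)."""
    individual_variation = random.gauss(50, 10)  # influences the experimenter cannot remove
    manipulation_effect = 5 if treated else 0    # effect of the manipulated independent variable
    return individual_variation + manipulation_effect

treated_scores = [observe_performance(True) for _ in treatment_group]
control_scores = [observe_performance(False) for _ in control_group]

def mean(values):
    return sum(values) / len(values)

print("treatment group mean:", round(mean(treated_scores), 1))
print("control group mean:  ", round(mean(control_scores), 1))
print("estimated effect of the independent variable:",
      round(mean(treated_scores) - mean(control_scores), 1))
```

Because assignment to the two groups is random, the difference between the group means can be attributed to the manipulation rather than to the other influences on performance.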

 

In Public Administration, scientific research approaches administrative phenomena by studying, observing, analyzing, explaining and predicting. Administrative data are gathered, measured, tested and analysed so that findings and conclusions based on evidence are obtained. Considerable research has been carried out in Public Administration using several approaches, perspectives, foci and methods.

 

Regular experimental research inquiry began in Public Administration towards the end of the 19th century. The New York Bureau of Municipal Research could be said to have blazed the trail in terms of research methodology, investigation, production of reports and prescription of measures for administrative improvement of governments (Aghayere, 1997). Beginning from 1911, Public Administration became concerned with establishing itself as a science with distinct principles, methods, models and theories. Social science research, however, comprises and utilizes certain methodologies of its own.

 

The purpose of the experimental method, according to Cramer (2004), is to establish clearly the independent effect of the independent variable on the dependent variable. For example, in non-experimental research we may conclude that economic factors cause labour unrest. This statement, however, cannot be held with absolute certainty, because economic factors alone may not be the determining forces that cause labour unrest. It might be sheer coincidence that labour unrest occurs when there is an economic crisis, there might be some intervening variables between economic crises and labour unrest, or both labour unrest and economic crises might be symptoms of some unknown cause.
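
The ambiguity described in this paragraph can be made concrete with a small, entirely hypothetical simulation (not drawn from the paper): both "economic crisis" and "labour unrest" are generated from a single unobserved factor, so the two appear strongly associated in observational data even though neither causes the other. The variable names and thresholds are assumptions chosen only for illustration.

```python
import random

random.seed(2)

def simulate_year():
    """One hypothetical year in which an unobserved factor drives both outcomes."""
    instability = random.random()          # unobserved common cause (e.g. political instability)
    economic_crisis = instability > 0.60   # symptom one
    labour_unrest = instability > 0.55     # symptom two; not caused by the crisis itself
    return economic_crisis, labour_unrest

years = [simulate_year() for _ in range(10000)]

unrest_in_crisis_years = [unrest for crisis, unrest in years if crisis]
unrest_in_calm_years = [unrest for crisis, unrest in years if not crisis]

print("unrest rate in crisis years:    ",
      round(sum(unrest_in_crisis_years) / len(unrest_in_crisis_years), 2))
print("unrest rate in non-crisis years:",
      round(sum(unrest_in_calm_years) / len(unrest_in_calm_years), 2))
```

The observational association is strong, yet by construction neither variable causes the other; only manipulating the supposed cause directly, as an experiment does, would separate the competing explanations.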

           

These research mysteries can be unravelled only in an experimental setting (Brown, 1980). It is therefore in experimental research that we can see clearly the effect of an independent variable on a dependent variable. However, it is almost impossible to conduct laboratory-type experiments in public administration research.

           

 

Experimental research in the social sciences is therefore extremely limited compared with that in the physical sciences, for four reasons. Firstly, theories are not well established in the social sciences, and since experimentation is guided by theory, it will be difficult to carry out effective experimentation until concepts and theories are more clearly and precisely formulated.

 

Secondly, the inherent nature of the social sciences places substantial difficulty in the way of experimentation: the extreme complexity of the interrelations and reactions in the environments in which human beings function makes it difficult to ensure that the experimenter is not oversimplifying reality and is not combining or separating variables in an unrealistic way. It is far from easy, if it is possible at all, to design an experiment which allows for the finely balanced interplay of human and social factors and which approximates reality to a satisfactory and measurable degree.

 

Thirdly, the treatment of experimental subjects and controls in social science experimentation requires that, whilst some variables are being manipulated, others are controlled and held constant. A good deal of doubt may be cast upon the purity of the reactions of human groups: to what extent can the experimenter be sure that his experimental group or control group is reacting “naturally”?

 

Lastly, there is the difficulty of measurement: what units do we use to measure? For example, what units does a researcher use in measuring creativity in management, or the quantum of resistance to change? (Babbie, 1973).

 

Statement of the Problem

Researchers in the social sciences are required to report their research findings objectively. This requirement is not easy to attain. To overcome this problem, researchers have to choose data collection methods that generate data which meet the objectivity criterion; the methods of data collection should also be reliable and valid.

 

These criteria are partially met by the use of experimentation as a veritable tool in public administration research. Experimentation as a tool of data sourcing and hypothesis testing relies on experiments with people: their behaviour and attitudes towards certain phenomena.

 

It has the advantage of a high degree of control over the intervening variables that affect other types of data collection in social science research. Consequently, the questions for this paper include: how valid and reliable is the use of experimentation in social science studies? Are the findings obtained through experimentation devoid of flaws or mistakes? If they are not, what are the best methods for conducting experimental research in public administration, and how do we limit or manage these problems?

 

Areas of Experimentation in Public Administration Research

A good deal of the work of public administration is experimental in the sense that, when it is embarked upon, it is by no means known whether it will be a success, and the initiators are, from the outset, prepared to scrap the work and try something else if it is not successful. This is the “trial-and-error” method (Myrdal, 2004).

 

Although established routine administration is not within this category, much that is new can be considered experimental regardless of whether it is small or large in scope, whether it is the experimental layout of a new office or whether it is of the political magnitude and administrative complexity of, say, the Central African Federation. Both are experiments. That is to say, we do not know from the beginning how they will work and whether they will accomplish what we aim at, so we try them, observe them, and if they do not work we modify them or scrap them and try something else. They resemble experimentation in the natural sciences to the extent that the experimental design is consciously established, some of the elements are manipulated as the experiment proceeds, and the results are observed. The absence of strict measurement and control makes them differ from more rigorous experimentation (Nowaezyk, 2008).

 

Some items seem to lend themselves to relatively frequent experimentation; for example, it is not unusual for some functions of government to be tried in a variety of ministerial portfolios before they settle down to their “control” location.

 

In the words of Osuala (1993), the observation method refers to the selection, recording and encoding of that set of behaviours and settings concerning organisms which is consistent with empirical aims. Explaining further, Osuala (1993) sees selection as the emphasis on what the scientific observer attends to; provocation recognizes the relationship between experimental intervention and the observational method; recording refers to the recording of events through field notes or any other means; encoding refers to the process of simplifying the record through some data reduction method; and empirical aims emphasizes the variety of functions that observational methods can serve, such as description and the generation or testing of hypotheses and theories.

 

Another functional area of experimentation is civil service training, and this is particularly clear in Africa. After a long period of inactivity, and with the advent of independence, the need to train national civil servants became recognized as a matter of great urgency. In the absence of analogous situations elsewhere and in the past which could be copied to a reasonable degree, forms of training were tried – mixtures of residential and on-the-job tuition, new teaching techniques, new teachers – and if they were not successful they were scrapped and something, or someone, else was tried. The general success of these schemes should not blind us to the fact that they were an experiment, or rather a series of experiments (Obasi, 2002).

 

Research Into Experiments in Public Administration

There is a growing literature of administrative history, based on extensive investigation and dealing with a wide range of administrative experiments. In the words of Loftus (1998), the research project on public administration training by the Institute of Development Studies, University of Sussex, is of this type, and its enquiries into training in India, Pakistan, Kenya and Zambia underline the experimental nature of the training programmes and institutions being studied.

 


The value of this type of research, according to Horden (1969), lies in the careful analysis and evaluation of the experiment: what went wrong and why; to what extent, and why, was it successful; how could the various components have been manipulated to provide a greater degree of success; can similar experiments safely be applied elsewhere? What, in short, are the lessons to be learned from the experiments, and how can these lessons be applied elsewhere and in the future? It follows that the more closely in time the analysis and evaluation follow the experiment, the more useful the lessons learned are likely to be. Indeed, there is much to be said for research into experiments being carried on at the same time as the experiments themselves, and this is an area in which close relations between the practitioner and the academic could be valuable.

 

Experiments Having Public Administration Implications

Certainly, the field of experiments which have public administration implications is very wide indeed, and these implications are of two kinds. Broadly stated, experimentation in the natural sciences may be said to have the aim of working on the actual objects of administration, the things which have to be dealt with administratively, such as pollution, whereas experimentation in the social sciences often relates to the administrative means and their effectiveness, such as communications and leadership (Ghai, 1999).

 

As governments and other public authorities play a greater and greater part in conducting or encouraging general research with experimental components, Amartya (2005) opined, so will the public administration implications of experimental research increase. The financing, planning and control of such projects are clearly part of public administration, not only because public money is being spent but also because the public has, presumably, a fairly direct interest in the research. For example, experimental research into the control of pollution could have quite wide public administration implications (Amartya, 2005).

 

The control of pollution on a wide scale is the responsibility of central government, upon whom will be brought to bear the pressures of interest groups – health authorities, local councils, fishing industries, tourist agencies. The means of control will usually be the result of experimentation; control may well require legislation followed by a system of public inspection and the expenditure of public funds (which have to be accounted for and audited); and a number of administrative units will need to be established to administer the legislation. In short, the matter will become bureaucratized and will become an integral part of public administration. According to Smith (1976), the bulk of the files in the British Home Office and Attorney General’s office indicates the way in which this type of experimentation has implications for public administration.

 

Experimentation in some other fields has important public administration implications of a less direct nature. For example, the Hawthorne experiments of the 1930s, and their conclusions about human and social behaviour, had, or ought to have had, considerable effect on “man-management” in the public services (Bare, 2003). Again, the experiments of advertising researchers – into the size, format, colour, frequency and location of the information being advertised – are important, not only for departments of information and other public propaganda agencies, but for every governmental unit which advertises staff vacancies in the public press, produces and publishes an annual report, or issues posters and pamphlets about its activities. The experiments of psychological researchers into small-group dynamics are of interest because they cast light on behaviour in such environments as the office, the committee room and the classroom of training establishments. The experimental research of organization and management specialists yields a continuous flow of results which are of direct interest to public administration.

 

Much of this experimental research in fields closely allied to public administration, or rather its conclusions and their possible applications, ought to be more closely studied by administrators.

 

Concluding Remarks

The question of experimental research in public administration places us in a dilemma and, as is usually the case, we must try to strike a balance between the opposing factors. On the one hand, since the raw materials and the clientele of public administration are the public, since he would be experimenting with human beings, and since he would be spending, and quite possibly wasting, public money of which he is the trustee, the public administrator is naturally and quite properly reluctant to experiment in his job.

 

Yet on the other hand, this very reluctance to “play around” with the public and with public money is a strong argument in favour of experimental research prior to the introduction of any large-scale, complex or costly scheme. The “pilot project”, if carefully designed and conducted in as strict a scientific experimental fashion as possible, is a most useful method of ensuring, so far as one is able, that one’s hypotheses are valid and that unforeseen factors, relationships and responses come to light. Pilot projects, however, must be treated as serious experimental research, with the various stimuli and responses carefully measured and compared with controls; if they are not treated in this way, it will be impossible to demonstrate satisfactorily and honestly whether the project has been a success and whether it should lead to the full-scale project being contemplated. Also, the difficulties of experimenting with human beings and of risking waste of public funds should be considered in any fruitful public administration research.
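
As a rough illustration of the comparison with controls recommended above, the sketch below uses hypothetical figures (assumptions, not data from the paper) for a pilot office and a comparable control office, each measured before and after a scheme is introduced; the pilot's change is judged against the control's change rather than on its own.

```python
# Hypothetical before/after performance scores for a pilot site and a control site.
pilot_before, pilot_after = 62.0, 74.0
control_before, control_after = 61.0, 65.0

pilot_change = pilot_after - pilot_before        # 12.0
control_change = control_after - control_before  # 4.0

# Judging the pilot against the control guards against attributing
# background trends (which affect both sites) to the scheme itself.
estimated_scheme_effect = pilot_change - control_change
print("estimated effect attributable to the pilot scheme:", estimated_scheme_effect)  # 8.0
```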

 

References

Aghayere, V.O. and Ojo, S.O.J. (1997), Research Methods in Social Sciences, Ibadan: Stirling-Horden Publishers (Nig.) Limited.

Amartya, S. (2005), "Development: Which Way Now?", Economic Journal, Vol. III, No. 2.

Babbie, E.R. (1995), The Practice of Social Research, Belmont: Wadsworth Publishing Co.

Bare, O. et al. (2003), Research in Africa, Evanston: Northwestern University Press.

Black, J. and Champion, D. (1996), Methods and Issues in Social Science Research, New York: John Wiley.

Brown, S.R. (1980), Political Subjectivity: Application of Methodology in Political Science, New Haven: Yale University Press.

Casley, D.J. and Lury, D.A. (1990), Data Collection in Developing Countries, Oxford: Clarendon Press.

Cramer, D. (2004), Introducing Statistics for Social Science Research, London: Routledge.

Dexter, A. (2004), Elite and Specialized Interviewing, Evanston: Northwestern University Press.

Frund, J.E. (1979), Modern Elementary Statistics, New Jersey: Prentice-Hall.

Ghai, D. (1999), "Participatory Development: Some Perspectives from Grass-roots Experience", UNRISD Discussion Paper, 5th June.

Harson, D.G. (2000), Handbook of Political Science Method, Boston: Holbrook Press Inc.

Horden, R. (1969), Interviewing: Strategy, Techniques and Tactics, Homewood, Illinois: Dorsey.

Kerlinger, F.N. (1973), Foundations of Behavioural Research, New York: Holt Press.

North, R.C. et al. (1973), Content Analysis, Evanston: Northwestern University Press.

Smith, B.L. et al. (1976), Political Research Methods: Foundations and Techniques, Houghton Mifflin.

Tufte, E.R. (2004), Data Analysis for Politics and Public Policy, New Jersey: Prentice-Hall.

Uhunmwuangho, S.O. (2007), "Some Methodological Insight into Social Science Research", Nigerian Journal of Citizenship Education, Vol. 5, No. 2.