This belongs to my revision of social research strategies: I am going to summarise the key differences between inductive and deductive research approaches – but first, what they have in common. Both strategies are rooted in positivist assumptions about epistemology and ontology. The underlying empiricism – the notion that only knowledge gained through experience and the senses is acceptable – is implemented through rigorous testing. Enlarging the number of instances observed (the sample) increases plausibility and the number of regularities identified. The accumulated ‘facts’ provide the basis for general laws of cause and effect. These are depicted in models as independent (predictor) and dependent (outcome) variables.
In the inductive approach, theory is derived from the observations made; this approach cannot test hypotheses but generates them. Deduction, in contrast, is theory-driven: it is based on preconceptions and aims to overcome the limitations of induction. It puts theories to the test, which means hypotheses can be falsified and disproved. The aim is to move closer to the truth through the gradual elimination of false theories, which implies that theories tested and not disproved can only ever be considered provisional.
Ideally, a deductive approach starts with a theoretical framework (for instance, based on Erving Goffman’s ‘stigma’ or Pierre Bourdieu’s ‘social capital’) and the formulation of hypotheses. Usually, this includes an alternative hypothesis (also called the experimental hypothesis, which states the assumed effect) and the null hypothesis (which states that the effect is absent). What follows is the data collection, which delivers findings that lead either to rejection or retention of the null hypothesis, and a subsequent revision of the theory.
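The logic of testing a null hypothesis can be sketched in a few lines of code. Here is a minimal illustration using a two-sample permutation test in Python (standard library only); the groups, scores and cut-off are invented purely for illustration, not taken from any real study:

```python
# Minimal sketch of null-hypothesis testing via a permutation test.
# All data below are invented for illustration.
import random
from statistics import mean

random.seed(42)

treatment = [5.1, 5.8, 6.2, 5.5, 6.0, 5.9]  # hypothetical scores with the assumed effect
control = [3.0, 3.4, 2.8, 3.6, 3.1, 3.3]    # hypothetical scores without it

observed = mean(treatment) - mean(control)  # the effect actually observed

# Under the null hypothesis the group labels are arbitrary, so we
# shuffle them repeatedly and count how often a difference at least
# as large as the observed one arises by chance alone.
pooled = treatment + control
n = len(treatment)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally below 0.05) leads to rejection
# of the null hypothesis, i.e. support for the experimental one.
```

The same decision rule underlies the t-tests and ANOVAs discussed later; the permutation approach simply makes the ‘how likely is this under the null?’ reasoning explicit.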
In practice, though, deduction often entails an element of induction and vice versa. This is rooted in theoretical reflection once the data have been collected, or in the desire to establish the conditions under which the theory holds (or not). This continuous weaving back and forth between data and theory is called an iterative strategy; it is particularly evident in qualitative research that takes a grounded theory approach, and it is a way to add to the validity of research. In quantitative research, it is advisable to distinguish carefully between the more complex development of theory and the generalisation of empirical findings.
Recording an Audio Podcast mp3 with Audacity
This is a very useful and clearly explained tutorial. I had downloaded the software some time ago but had not yet managed to look into some of the aspects I felt I needed to improve. Towards the end of the year, I will be required to produce podcasts for the H808/eProfessional course. So, I’ve just been updating the course-related (password-secured, I know… what’s the point) wiki and will get a headset as soon as the Royal Mail decide they have been striking enough for this summer…
Audacity is free, open-source software for editing sounds and producing mp3 files; it works on various platforms and is quick to download.
With all the studies going on, I don’t want to repeat my errors of the past (too focused on one area, lack of stimulation and progress in another). It’s all about getting the balance right – having a few slots per week for regular exercise and the odd special treat to keep me going. I started spinning classes in February this year, after years of rather dull gym routines; they pushed me to new limits – and insights. I was surprised by, and indulged in, these new skills. Luckily, I also have an instructor who loves his job and never tires of challenging us. I tried climbing and bouldering recently, after coming back from California, where the outdoor lifestyle made me think about life indoors in less sunny and less spacious London. The teamwork in abseiling practice, the mental challenge, the fear in a controlled drop – it all adds to the set of knowledge bites which are not formally recognised. These, I believe, are undervalued – or taken for granted, depending on the situational context – until a situation comes up that proves that a whole range of skills are not given but acquired.
So, ahead lies a period of ten very densely packed months, with the social research methods and skills exam in October and a total of 12 papers (amounting to a daunting 40,000 words), plus numerous pieces of assessed contributions to online collaboration (it’s no longer fun and play only) for the MAODE. Here is what I hope to try at some point – let’s see what I will actually be able to realise:
- Kayaking / bungee jumping – www.exelement.co.uk
- Zorbing – www.zorbing.co.uk
- Outdoor rock climbing – www.exhilaration.co.uk
- Indoor skydiving – www.airkix.com
- Microlight flying – www.microlightflyingschool.co.uk
- Ice climbing – www.verticalchill.com
- Blokarting – www.cornwallblokartcentre.co.uk
- White water rafting – www.ukrafting.co.uk
- Coasteering – www.tyf.com
As mentioned before in my post on Research Methods and Skills, here is a review of the book by Andy Field that has proved most helpful in my current postgraduate statistics course:
FIELD, A. (2009) Discovering Statistics Using SPSS. 3rd ed. London: Sage.
For this 820-page oeuvre there is a companion website available with a number of student resources, such as multiple-choice questions and a flash-card glossary: Field’s companion on Sage.
I got the edition which includes a 13-month student licence for SPSS Version 17.0.
There are a number of reviews on Amazon. It is the most user-friendly, smartly structured, accessible and entertaining statistics book I have come across. If you are a busy student with more than just a commitment to studies, try this. Field does a fantastic job of providing an all-you-need volume that does not step into the trap so many other authors seem unable to avoid: believing they can fragment statistics and provide either the maths only, the SPSS only, or some chunk of statistics that leaves you unsatisfied because you still don’t understand how to apply the findings to any case other than the model discussed.
Field’s book provides a 16-page glossary, seven pages of references and an index, plus an appendix which contains the following:
- Table of the standard normal distribution
- Critical values of the t-distribution
- Critical values of the F-distribution
- Critical values of the chi-square distribution
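The standard-normal table in the appendix can also be reproduced in code. As a sketch (not from Field’s book), Python’s standard-library `statistics.NormalDist` gives the same quantiles directly; the t, F and chi-square tables would need an extra package such as scipy, which is not shown here:

```python
# Reproducing entries of the standard normal table with the
# Python standard library (statistics.NormalDist, Python 3.8+).
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

# Two-tailed 5% critical value: 2.5% in each tail.
z_crit = z.inv_cdf(0.975)
print(f"z critical (alpha = 0.05, two-tailed): {z_crit:.3f}")  # ~1.960

# And back again: the area beyond +/-1.96 is the familiar 5%.
tail_area = 2 * (1 - z.cdf(1.96))
print(f"P(|Z| > 1.96) = {tail_area:.4f}")  # ~0.0500
```

Handy for checking a looked-up value, though for coursework the printed tables (or SPSS itself) remain the reference.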
There is a separate chapter about SPSS: the environment, the viewer, the SmartViewer, the syntax and more. A list of mathematical operators, Greek symbols and English symbols comes in very handy, as does the brief maths revision.
Each chapter ends by highlighting the important terms, which is very useful for revision. There are self-tests, references for further reading and interesting real research, as well as boxes which explain either ‘strange dialogue boxes’ (in SPSS, and how to make sense of them) or concepts (such as degrees of freedom).
The chapters are clearly structured, the language is clear, and terms are explained throughout, so you won’t have to flip nervously through several books at once, doing the work a smart author and editing team should have done for you. Formulae and tables produced in SPSS are displayed in a logical manner: for instance, the dialog box to be selected in SPSS is followed by a scatterplot, which in turn is illustrated by SPSS outputs. The latter are also explained in detail, so you know what they actually mean, how to write them up in a conventional way, how to analyse them and how to interpret the outcome. Key terms are printed in red, and the SPSS dialogue boxes also come in colour.
Recoding, for instance, is explained for those using the recode function in SPSS, but for those who do a lot of recoding there is also the syntax and a related file on the CD. There are a number of data sets available to play with; Field has chosen areas such as the impact of Viagra on a person’s libido (getting those on board who are tired of jobs in postoffice.sav or uninspiring government statistics on traffic) in order to explain ANOVA (analysis of variance).
Field’s Statistics Hell is also very useful and offers:
- Lectures on frequency distributions
- Handouts: SPSS: t-test, frequency distributions and correlation
A fortnight ago I attended the workshop, and the analyses of ethical dilemmas in a range of different fields, explored there with colleagues, gained another dimension during a conversation I had last week with a friend in continental Europe – someone connected neither to research nor to sociology or internet studies in the wider sense.
I notice that assuming a universal notion of ethics, online collaboration, self/community, individual authorship and Creative Commons amounts to assuming we all see the same sky, every day. We don’t. And this is not only due to geographical location, national politics and regulation, but also to varying degrees and facets of collective unconsciousness. The kind of public debates I have access to here in the UK through traditional media, including print, differs considerably from German debates (as do the US debates I access online). Different types of angst feed into such discourses at macro and micro levels. Only by actively seeking to push my personal boundaries, challenging my own ideas and questioning what others take for granted has something of a more personalised value system, based on eclecticism, emerged. This is a mix of nationally framed legal regulations, enhanced by ethical guidelines compiled by academic and professional bodies, plus a range of personal, in part moral, beliefs.
The questions I have in mind are:
- Are others similarly aware of their values and beliefs and their origin?
- Are they subscribing to a notion of values in flux or rather static, life-long held beliefs when it comes to moral values and ethics, in particular in the globalised virtual sphere?
- Where do awareness and reflexivity come from if not formally acquired, and what role do social media play in this? Are they undermining, challenging or enhancing ‘everyday ethics’?
Clashes and opportunities are produced in social networks which offer discussions in forums and groups. Large and heterogeneous groups of individuals engage in debates and become exposed to ideas, behaviours and practices they are less likely to encounter in real life in such a speedy, diverse and dynamic manner. I recalled my own experiences and reviewed my impressions, wondering whether research could be improved in its ethical quality if more consideration were given to the following aspects:
- Communication skills and awareness levels are culturally embedded; they are often taken for granted and subject to assumptions rather than being explicitly discussed and reflected upon. If researchers take a reflexive approach, why not offer research participants the chance to engage in a collective exercise of reflexivity too?
- The digital divide 2.0: social media super-users vs social media sceptics – are super-users ethically more aware, as they are more likely to be exposed to a wider range of positive as well as ethically problematic behaviours?
- How do adult research participants learn about ethical issues? Informal learning processes (which can be an incentive for research participants as well as researchers), crowdsourcing practices and non-target-driven engagement in social network sites may result in a stronger sense of authorship and a willingness to challenge practices of producing authoritative knowledge in the researchers’ world. Yet this may be the exception rather than the norm. Would researchers and societies benefit from a more proactive approach on the part of researchers, for instance by including such debates in research projects and making them part of the data collection?
- Not just Twitter but also Facebook is a major site that may potentially help to increase attribution awareness. However, as attribution practices, for instance on Twitter, evolve rapidly but have not yet stabilised, we cannot assume users will naturally adjust to and adopt the most ethically beneficial syntax at some point. Flickr, for instance, currently offers four explicit options under the Creative Commons licence – plus the option not to license images and videos but to make them freely available for all purposes. The advice is provided in clear language, and many users may develop an awareness of authorship and copyright; however, others may not even bother to find out the differences between the options.
What is supposed to be right, or how things should be done online, differs widely; conventions are emerging and being challenged on an ongoing basis. The trust gained over time through familiarisation with social network sites and social bookmarking sites, as well as expertise in online commenting, eloquence and online ‘street wisdom’, separates social-media-savvy users from those who rather stick to e-mail and the consultation of conventional websites. This distinction also applies to researchers and academics. Awareness-building and reflexivity, as well as ethical considerations, should accompany the entire research process, from drafting to publishing and beyond, when participants critique the findings and interpretations. The learning could and should be mutual, without fearing that the researcher’s expertise and specialist position is under threat, although it might well come under scrutiny due to the increased level of transparency. That may be a very optimistic stance; yet a paradigm shift towards collaboration in a partner-like manner could be beneficial and much more sustainable in the long term, and it could help to educate where institutionalised learning fails to reach out.
The key discussion points and questions raised at the workshop have been posted by Anne Beaulieu at the Virtual Knowledge Studio as FAQs, which underlines the fact that ethics in (e)research is not only ongoing and iterative but a process rather than a stage at some point of a research project – meaning that frequently asked questions may require new answers each time we encounter the dilemma.