In my last post I talked about some of the unique considerations social media brings to governance generally and policies in particular. So let me start with the point that an organization may not need a social media policy at all. This sounds counterintuitive in light of that post and given that the new Social Media Governance Practitioner course has several modules on the policy. However, the organization may already have a well-written policy that addresses these issues in a channel-neutral fashion, such as a communications policy or internet policy. In other words, the organization certainly should not write a custom policy for every tool it uses - that's all the governance staff would ever do! Rather, the policy should be broadly applicable regardless of whether content is posted through Facebook, Yammer, or even email or instant messaging.
Assuming that those policies either don't exist or do not adequately cover the organization's requirements, the team will need to address them. In brief, an effective social media policy should address the following areas:
Inappropriate usage. This is probably the easiest because the organization likely already has such a policy in place for email or more broadly.
Appropriate usage. This will likely apply only to official accounts and would address things like use of the organization's logo, requirements for headshots on some or all staff accounts, etc.
Ownership. This is a sticky issue right now and the case law is very fluid. There is a tension between the organization's expectation of ownership of content created by its employees on its time and at its behest, on the one hand, and the commercial services' Terms of Service, most of which explicitly assign ownership to the individual content creator, on the other. Non-disclosure and employment agreements can address this to some extent, but as of today there really is no "right answer" for who owns an employee's Twitter content or LinkedIn contacts.
Third-party content. If this is allowed, such as video uploads or blog comments, any restrictions should be spelled out. These will often relate to the inappropriate usage policy as noted above but there could also be issues with third parties posting sensitive content that might need to be reviewed, blocked, or deleted.
Personal usage. Prohibiting this in the age of BYOD and ubiquitous broadband internet is patently unrealistic - not only has the horse left the barn, but the barn's been bulldozed, paved over, and built upon! If employees are going to waste time, blocking Facebook won't address the larger issue. What the policy does need to do is remind employees that personal social media usage should not interfere with their own work or that of their colleagues.
Representation and affiliation. In some organizations only authorized people may speak on behalf of the organization - everyone else is required to forgo even mentioning the organization by name. In others it's fair game - and there is growing research suggesting that the latter approach makes an organization more approachable and trusted. At a minimum, the policy should require that those speaking on behalf of the organization make their affiliation clear: no anonymous accounts, no fake accounts, no accounts created under false pretenses. Affiliation should be posted somewhere prominent on the service, such as the About or bio section, and should be disclosed whenever there could be an appearance of conflict.
At the same time, it may be helpful to have employees who are speaking their own minds disclose their affiliation while disclaiming representation of the organization. This doesn't always work - the president of the company probably cannot expect to separate her opinions from her role - but for most staff it's often appropriate to include something like "views expressed are my own and do not represent the positions of my organization or its staff."
Ethics. If the organization has a code of ethics it is certainly appropriate to refer to it in the social media policy.
Protection of information. The policy should remind employees not to disclose any sensitive information whether it relates to intellectual property, personal information, financial or trade secret information, or anything else of that nature. The Internet doesn't forget and posting that type of information can result in significant liability for the organization. The rule of thumb is, if it shouldn't be printed on the front page of the Wall Street Journal, it shouldn't be tweeted/posted/etc.
Pre- and post-publication review and monitoring. As noted above and in my previous posts, in most instances it simply doesn't make sense to require approval of each Facebook Like or tweet. Where such approval is required - as it is for financial services in some cases in the US - that's different. But as a rule, organizations should trust their employees and provide the guidance and training required to allow them to be trustworthy. Post-publication monitoring, on the other hand, is often a good thing for any number of reasons, including sentiment analysis and brand awareness. But again, the organization should be clear about what it is doing and why, and disclose this when appropriate.
Accountability. The policy should remind employees that they are accountable for the things they post and that there could be consequences for both individual employees and the organization at large if the policy is not followed.
Recordkeeping requirements. Facebook is not a viable platform for records management. There are ways to manage social content as records with varying degrees of effectiveness; if this is a requirement, the policy should state it. If the organization cannot manage social content effectively in this context, the policy might instead state that transactions should be completed through a more appropriate channel such as email.
Tool-specific considerations. Where these exist, the policy should address them as necessary. For example, the policy might prohibit the use of Facebook or LinkedIn messaging or Twitter direct messages because they are more difficult to manage.
Sector-specific considerations. Many highly regulated industries have detailed requirements for privacy, security, recordkeeping, and the like. Where appropriate these should be included or referred to.
Two last points. First, there are a significant number of social media policies available on the web that organizations can use as a starting point. A search for "social media policy" will return hundreds of them, many quite good. Many also include guidelines; my preference is to keep policy and guidelines separate because the latter will likely require more frequent updates than the former as technologies evolve.
Second, no policy can cover everything. It's important to put the governance scaffolding in place and train employees on expectations, but organizations must also realize that it is extraordinarily difficult to apply centralized and comprehensive controls to social media because there are so many different channels and they evolve so quickly. By far the better approach is to combine the policy as outlined here with training and inculcation of the organization's values into the employees.