Saturday, January 29, 2011

Problem when upgrading Debian from stable (lenny) to testing (squeeze)

I just got a VPS for private use and wanted to upgrade it to Debian testing (squeeze) in order to get newer packages. The installed version was Debian stable (lenny).

What I did was simply:

  1. edit /etc/apt/sources.list and replace lenny with squeeze
  2. run apt-get update
  3. run apt-get dist-upgrade

This is, AFAIK, the standard way of upgrading a machine. However, I got the following error:

Selecting previously deselected package insserv.
dpkg: considering deconfiguration of sysv-rc, which would be broken by installation of insserv ...
dpkg: yes, will deconfigure sysv-rc (broken by insserv).
(Reading database ... 37095 files and directories currently installed.)
Unpacking insserv (from .../insserv_1.12.0-14_i386.deb) ...
De-configuring sysv-rc ...
Setting up insserv (1.12.0-14) ...
(Reading database ... 37124 files and directories currently installed.)
Preparing to replace sysv-rc 2.86.ds1-61 (using .../sysv-rc_2.87dsf-8_all.deb) ...
touch: setting times of `/etc/init.d/.legacy-bootordering': Bad address
dpkg: error processing /var/cache/apt/archives/sysv-rc_2.87dsf-8_all.deb (--unpack):
 subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
 /var/cache/apt/archives/sysv-rc_2.87dsf-8_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

I get the same error when trying to install sysv-rc on its own. The file in question (/etc/init.d/.legacy-bootordering) exists, but is empty.

Does anybody have an idea what causes this error, and how I could solve it?

  • The standard upgrade instructions usually include a basic upgrade (upgrade or safe-upgrade) before the dist-upgrade step; a command sketch follows below. You should give the upgrade instructions in the release notes a read. The notes for Squeeze do not appear to be available yet, unsurprisingly, but those for Lenny should be a good start: http://www.debian.org/releases/stable/i386/release-notes/ch-upgrading.en.html

    Wookai : Since I can re-install the VPS to its original state, I tried to first do a dist-upgrade on lenny to get all packages up to date before changing to squeeze. But I get the same error...
    Wookai : It seems that Debian testing is currently broken, according to a moderator on Debian.net. Let's wait for a few days before trying again.
    womble : `update` doesn't upgrade any packages.
    David Spillett : Sorry, that should have been upgrade - I'll correct the reply.
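
As a command-level sketch of that two-stage upgrade (run as root, after editing /etc/apt/sources.list to point at squeeze):

    apt-get update          # refresh the package lists for the new release
    apt-get upgrade         # minimal upgrade of already-installed packages first
    apt-get dist-upgrade    # then the full upgrade, allowing new packages and removals
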
  • The problem comes from a preinst script (a package maintainer script that is run before the installation of the package). It is likely to be '/var/lib/dpkg/info/sysv-rc.preinst'.

    touch fails to set the modification date of /etc/init.d/.legacy-bootordering.

    Try to 'touch' the file yourself; try deleting it and rerunning the touch (a diagnostic sketch follows below).

    Did apt upgrade glibc before sysv-rc? There are many bug reports concerning that problem, not only on Debian.

    Look for "touch bad address" on search engines...

    I believe this is related to both the kernel and libc.

    Wookai : Yes, I found out about the "touch bad address" problem later. Looks like we have to wait until the bug is fixed... Thanks for the input though!
    From zecrazytux
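
For anyone hitting the same wall, a rough diagnostic sketch based on zecrazytux's suggestions (the paths come from the error message; the preinst path is the one guessed above):

    ls -l /etc/init.d/.legacy-bootordering                            # the file the failing touch operates on
    touch /etc/init.d/.legacy-bootordering                            # does touch fail outside dpkg too?
    rm /etc/init.d/.legacy-bootordering && touch /etc/init.d/.legacy-bootordering
    grep -n legacy-bootordering /var/lib/dpkg/info/sysv-rc.preinst    # see what the preinst actually runs

If the manual touch fails with the same "Bad address" error, the problem is likely in the kernel/libc combination on the VPS rather than in the sysv-rc package itself, which matches the comments above.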

Centralized sudo sudoers file?

I am the admin of several different servers, and currently there is a different sudoers file on each one. This is getting slightly out of hand: quite often I need to give someone permission to do something with sudo, but it only gets done on one server. Is there an easy way of editing the sudoers file on my central server and then distributing it to the other servers by SFTP or something like that?

I'm mostly wondering how other sysadmins solve this problem, since the sudoers file doesn't seem to be remotely accessible via NIS, for example.

Operating system is SUSE Linux Enterprise Server 11 64-bit, but it shouldn't matter.

EDIT: Every machine will, for now, have the same sudoers file.

EDIT2: The accepted answer's comment was the closest to what I actually went ahead and did. I am now running an SVN-backed Puppet installation and, after a few headaches, it's working very well.

  • Step 1. Set up an LDAP server and configure all your machines to authenticate users and groups via LDAP.

    Step 2. Create a master sudoers group in ldap, say yourcompany-sudoers. Give that group permission to sudo (with password) in the /etc/sudoers file on each machine.

    Step 3. Create a sudoers-machinename group in ldap, add that group to /etc/sudoers on the corresponding machine.

    With those three steps you don't need to edit the /etc/sudoers file after the machine is installed, and you get a number of other benefits as well.

    For extra effect:

    Step 4. Set up Puppet, Cfengine, Chef or similar, and deploy a templated sudoers file to each machine automatically (see the sketch below).

    wzzrd : I like the puppet part, but as far as the multiple sudoers files stuff is concerned: I'd really stay away from that as far as possible, as it will turn into a maintenance nightmare real fast. I would suggest creating *one* sudoers file with different options for different machines with the Host_Alias directive, as I suggested below.
    Stefan Thyberg : This is a very good idea, unfortunately I don't want people to be able to sudo just anything at all, only very specific commands. Also, assume that every machine will have the same sudoers and the same commands allowed.
    asdmin : it's a complete overkill to sudo-ldap... for god's sake, sudo _can_ work directly from ldap
    Stefan Thyberg : I didn't do this solution since it would require me to install a new sudo, compiled with ldap-support, for each machine.
    Dave Cheney : Most UNIX's that have pam support can support ldap as an authentication source, so you just need to compile with pam support
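
For step 4, a minimal Puppet sketch of distributing one sudoers file (the module name and paths are examples, and older Puppet versions may use a different source URL scheme):

    # modules/sudoers/manifests/init.pp -- push a single managed sudoers file to every node
    class sudoers {
      file { '/etc/sudoers':
        ensure => file,
        owner  => 'root',
        group  => 'root',
        mode   => '0440',    # sudoers must be 0440 and root-owned or sudo will refuse to run
        source => 'puppet:///modules/sudoers/sudoers',
      }
    }

Run visudo -c against the source file before committing it, since a syntax error pushed everywhere would break sudo on every machine at once.
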
  • Alternatively, you could look into using version control (say git or mercurial) for some of your configuration files in /etc, put the sudoers file under said VCS, then have each machine pull its copy of the common configuration files from the repository.

    Stefan Thyberg : It's a good idea, but unfortunately a time gap is usually unacceptable for the change; it needs to take effect as soon as I say the change has been made. I assume you meant to have a cron job running update from the VCS. This also adds some extra problems, since the VCS is outside my "realm" of administration and I don't want anyone with access to the repository being able to change this file.
    Ophidian : I believe you could set up check-in event hooks such that the central repo would push changes to other machines, but that would start getting cumbersome as it needed to scale. With a DVCS like Mercurial, I would assume you would have your own repo set up for your administrative work separate from whatever is maintained for development. That way you could host it and lock it down as needed for administrative purposes. The versioning is really just an added bonus.
    Stefan Thyberg : Maybe I will look into this if LDAP-sudo does not pan out.
    From Ophidian
  • /etc/sudoers can also be replaced with calls to a centralized LDAP server directly. All of the permissions and settings you would usually set on the local machine get set in LDAP.

    http://www.gratisoft.us/sudo/man/sudoers.ldap.html

    Mark Farver

    Stefan Thyberg : This is looking like my favorite answer so far, I will be trying this out with one machine to see if it works well.
    From mfarver
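
If you do go the sudoers-in-LDAP route, the client side boils down to a couple of directives from the sudoers.ldap manual in sudo's LDAP configuration file. A sketch - the server name and base DN are examples, and it assumes a sudo binary built with LDAP support:

    # /etc/ldap.conf (or sudo's own LDAP config file, depending on how sudo was packaged)
    uri            ldap://ldap.example.com
    sudoers_base   ou=SUDOers,dc=example,dc=com
    #sudoers_debug 1     # uncomment while testing to see what sudo looks up
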
  • The absolute last thing I would want to do is create a separate sudoers file per machine, as Dave suggests. If you have a lot of machines, and only subtle differences apply (as is often the case), you really do not want this. It will generate a lot of overhead.

    What you really want to do is create one sudoers file. In that sudoers file, you can then define Host_Aliases for groups of systems to which you want a certain policy to apply (a short example follows below). You can also make User_Aliases and whatnot. Done right, this gives you a huge benefit by having one file to edit, so it is easy to see what applies where, and you don't have to worry about different versions of the sudoers file being deployed on different machines by accident.

    New versions of sudo even support the sudoers.d directory in /etc, which might be of help too, but I haven't tried that yet.

    Stefan Thyberg : Assume that, for now, I want the exact same policy for every machine.
    mfarver : Then the first suggestion to use Puppet is the way to go. This looks like a good getting started guide: http://www.lindstromconsulting.com/node/2 Puppet is incredibly useful, and completely worth the time it takes to figure out and setup.
    Stefan Thyberg : I used this solution with puppet in conjunction with SVN and it works very well so far.
    From wzzrd
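
A small sketch of what wzzrd describes, with made-up host, user and command names:

    # one shared sudoers file for every machine
    Host_Alias  WEBSERVERS = web01, web02
    Host_Alias  DBSERVERS  = db01, db02
    User_Alias  WEBADMINS  = alice, bob

    # WEBADMINS may restart Apache, but only on the web servers
    WEBADMINS   WEBSERVERS = /etc/init.d/apache2 restart
    # members of the dba group may restart PostgreSQL, but only on the database servers
    %dba        DBSERVERS  = /etc/init.d/postgresql restart

Because sudo only applies the entries whose host list matches the local hostname, the identical file can be copied to every machine.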

Roughly how much is the yearly salary of a system administrator?

Just for my own knowledge, I wanted to know how much salary a system administrator gets, and, with experience, what maximum salary is possible in that field. I know it's a wide question, but I just wanted to get an idea from people who are working with servers (2003 or 2008) or as sysadmins.

EDIT: To be specific, I mean for a country like Australia. I can't think of any more parameters to make it more specific; if someone can make it more specific, I will be thankful. But you should get what I want to ask.

  • Anywhere between "bugger all" and "not nearly enough to be dealing with this".

    Master : I live in Australia and i have read you also work in sydney . I you don't mind how much u get :)
    womble : "If I don't mind"? You're kidding, right?
    womble : OK, this has just gone from weird to creepy.
    Master : Again the same story , why r u after me only. Let me explore serverfault properly. You senior fellows are taking the extra advantage of your reputation
    rodjek : Just a little, yeah...
    womble : Senior fellows get paid a lot more than me.
    Master : Is there any pre-approval page where i can pre-approve my questions before asking?
    John Gardeniers : Not really but you might consider reading the FAQ before posting again.
    squillman : Hmm, yeah. Officially has now earned the spam flag.
    Master : I have read that questions should not be subjective. Is there any other place where i can find techs like here so that i can discuss these type of questions without getting -ve votes
    womble : Hie thee to http://meta.stackoverflow.com/
    Master : Can i expect u over there as well?
    From womble
  • How long is a piece of string? Such a question can only be answered within specific parameters. My salary cannot be compared to someone else's, doing identical work, in another part of the state, let alone the country or the world. There are far too many variables and no answer can possibly be correct (except the one Womble just posted).

    womble : Yay! I got the correct answer!
  • No matter how much I'm making, it's 10% less than I'd like to be ;)

    Seriously, there can be no reasonable answer to such an open-ended question.

    From phoebus
  • Salaries can vary widely by region. You should check monster.com and other job-hunting sites for salary comparisons in your region.

    Monster.com's salary checker also assigns different job titles based on job descriptions and numbers of years of experience.

    From rob
  • Try Salary.com. Enter the job type and a location and you'll get a cute little distribution graph.

    womble : That site appears to be US-only, and the OP already said it was for Australia (well, "country like Australia", but I doubt the US counts despite our many linguistic similarities)
    Boden : Oops, sorry! Didn't read close enough.
    Master : But thanks Boden , you did try to help . This is what matters
    From Boden

Limit services hosted by one svchost under WinXP?

Is there a registry setting or something to limit the number of individual services that are run within a single svchost process?

I'm aware of the WIN32_SHARE_PROCESS flag and the sc app's ability to make individual services run in their own process, but I don't want the overhead of a process for each of the dozens of services. Ideally I would like to see the 30 services from Automatic Updates through Workstation that are currently hosted by a single process be shared among 3 to 5 processes.

  • You can control which services get bundled together into a single svchost process by modifying the Registry entries at HKLM/SOFTWARE/Microsoft/Windows NT/CurrentVersion/SvcHost. You'll be modifying the values, each of which contains a list of the services that run within it (e.g. netsvcs).

    There's a bit more information about this out there, much of it related to the Conficker worm and cleaning up after it. This is based on research related to a situation where I had to manually clean out traces of some malware services.

    Update with additional resources/information: There's not a lot of information out there and I haven't experimented with this yet, but the most useful information I found earlier while researching was:

    Tim Sylvester : I saw a reference to that path in a MS KB article (#314056), but I got the impression that I can't just change those values. For example, the `netsvcs` group has server, workstation, etc., in it, and if you look at the service descriptors for those services in `System\CurrentControlSet\Services`, they reference `netsvcs` in their `ImagePath` setting. Is it safe to re-arrange the groups as long as I update the `ImagePath` for all the services that are moved?
    fencepost : Post revised with additional resources including ones with step-by-step of changes needed.
    Tim Sylvester : That first link is just what I was looking for. Thanks!
    From fencepost
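
A rough sketch of what such a move involves, using Automatic Updates (wuauserv) and a made-up group name - untested, so back up the registry and experiment on a non-production box first:

    rem 1. Define a new svchost group whose value lists the services it should host
    reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SvcHost" /v MySvcGroup /t REG_MULTI_SZ /d wuauserv /f

    rem 2. Point the service's ImagePath at the new group
    sc config wuauserv binPath= "%SystemRoot%\system32\svchost.exe -k MySvcGroup"

You would also need to remove wuauserv from the netsvcs value it currently lives in (regedit is easier than reg.exe for editing an existing REG_MULTI_SZ), which is exactly the ImagePath/group bookkeeping discussed in the comments above.
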
  • You won't notice any improvement in performance by doing this...

    If you want to start some services on demand, set them to Manual start and Windows will start the service when something tries to access it.

    If you want visibility into what service is causing a performance issue try launching a command prompt and using this command: C:\>tasklist /svc /FI "IMAGENAME eq svchost.exe"

    That will show you all svchost.exe instances, which services are running in each process, and the PID (process ID).

    Once you know the PID you can launch Task Manager (Ctrl+Alt+Del) and in the Processes tab go to View > Select Columns... and choose to show PIDs.

    Then you'll know which svchost.exe is causing your performance issue, and you can cross reference the PID to your list of services running in that instance of svchost.exe.

    Hope this helps address the underlying reason for your question.

    Tim Sylvester : I'm aware that running more hosts will actually *reduce* performance slightly since those services can no longer share resources. I want to be able to dynamically reduce the priority of processes that are using a lot of CPU without affecting nearly all the services at once, but I was also trying to find a balance between that and having every service use its own process. For a one-time analysis I would use the script from http://serverfault.com/questions/12278 to set them all to "own" and set them all back to shared when I found the problem, but I'm looking for something more dynamic.
    Garrett : `sc config <service> type= own` - you can set them all to own, find your culprits, and then set everything back to share, except the culprits?
    From Garrett

How can I tell a portable drive to ignore bad sectors on a Linux?

I have a USB hard drive, and it's old. It's FAT32-formatted. It's so old that parts of it are failing. When I tell it to read or write from certain parts, I get IO errors on my console (I'm using Ubuntu 9.10).

Is there some program I can run that will scan my drive for bad parts and then 'remove' them? I'm willing for this to cost me a few GB of space (it's a 160GB drive). There's nothing on the drive that I care about; it was reformatted recently and the same thing is still happening. It's currently formatted FAT32, but it will only be plugged into Linux machines, so I'm willing to try ext3 or some other Linux filesystem.

I know the real solution is to get a new drive, and one is on order. However, I need to give a hard drive to someone in the next few days, and this (partially broken) one is the only spare. If I can get this working, that'd be great. Is there some way I can reformat or repartition this drive so I have at least some usable drive space?

  • This question probably belongs on Super User, but anyway:

    Identifying the bad blocks isn't hard to do. You can use the program badblocks to do it. Getting it to ignore them is a tougher matter. If a drive is showing bad blocks then it means that the drive is out of spare blocks. It should probably just be trashed and I wouldn't pass that drive on to someone else.

    If you must use this drive, there is a chance that SpinRite will get it back to a healthy state - but only if the drive has mistakenly marked some blocks as bad. SpinRite checks all the blocks on the drive, including the ones marked as bad, and if it determines that a block the drive marked as bad is actually good, it returns it to the usable blocks, freeing up an entry in the spare block list. If it finds enough of these, the drive will show 0 bad blocks when you run badblocks and will hopefully have some spare blocks to spare.

    I had a laptop drive with 120 bad blocks, and this dropped to 0 after running SpinRite on it. The drive continues to work without any problems today, two years later.

    Boden : I can second the use of SpinRite
    Rory McCann : To be clear, I don't care about the data on it. I can reformat it.
    3dinfluence : You indicate that the drive was recently formatted. If that's the case and it's still showing problems then the problem is at the physical drive level...not the file system. A healthy drive should never show any blocks as bad. This is b/c drives keep a pool of spare blocks that they remap to when they encounter a bad block. So reformatting it doesn't change the fact that the hard drive has physically run out of spare blocks and now has additional bad blocks that it can't hide.
    3dinfluence : So here's what you need to do. First run `badblocks /dev/<device>`; if that outputs any bad sectors then do `sudo apt-get install smartmontools` and run `smartctl -A /dev/<device>`. This should give you the overall health of the drive. Look at Reallocated_Sector_Ct in particular, b/c I'm going to guess that this value is pretty high on this drive. This will confirm that your drive is remapping a lot of bad blocks. Otherwise you just have a file system problem, so reformat it to NTFS if you're using it with Windows and don't pick Quick Format.
    3dinfluence : My personal belief is that drives are so cheap these days that they aren't worth the time and effort involved in doing all this. Especially if the data on the drive is not important. Just replace it.
    Rory McCann : Yes there are almost certainly physical problems with the disk. Is there some way I can reformat/repartition it so that I can skip the bad parts? I'm trying to find a way to get some sort of usable storage space on it, even for a few weeks.
    3dinfluence : SpinRite and a bit of luck may do the trick. If all the bad blocks are clustered together you may be able to partition the drive to avoid the current bad blocks. However, any new bad block will result in an error that the drive cannot deal with, and you'll find yourself in the same or a worse situation if the drive holds data which is important.
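
If you do decide to squeeze a few more weeks out of the drive despite the advice above, ext3 can at least be told to avoid the blocks that are already known bad. A rough sketch - the device name is a placeholder, this wipes the drive, and any blocks that fail later will still cause errors:

    # easiest: have mke2fs scan for bad blocks itself (-c read-only test, -cc slower read/write test)
    sudo mkfs.ext3 -cc /dev/sdX1

    # or scan first and feed the resulting list in (block sizes must match, hence -b 4096 on both)
    sudo badblocks -sv -b 4096 -o /tmp/badblocks.txt /dev/sdX1
    sudo mkfs.ext3 -b 4096 -l /tmp/badblocks.txt /dev/sdX1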

Exchange IMAP4 connector - Error Event ID 2006

Hi,

A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows:

An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format. Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
Parameter name: value
   at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
   at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
   at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
   --- End of inner exception stack trace ---
   at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
   at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
   at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
   at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
   at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
   at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
   at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
   at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
   at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
   at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

From my reading around it seems that the suggestion is to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client:

"The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message.

Retrieval of this message will be retried when the server is updated with a fix that addresses the problem."

Messages were transferred in to Exchange by copying them from the old Apple Xserve, accessed using IMAP.

So my question, finally:
1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages since it doesn't seem to be pulling them directly from the MAPI store?
2. Alternatively, if there is no database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received.

Many thanks,

Mike

  • The Exchange 2007 IMAP4 server pulls messages directly from the Information Store database. There is no "cache" of messages.

    I have no explanation re: why the messages aren't showing up in OWA or Outlook.

    I believe the issue you're seeing is the one reported here, which the Microsoft poster purports will be fixed by Exchange 2007 Service Pack 2. I'd have a look at migrating to SP2 and seeing if that resolves the issue.

    MikeB : Thanks Evan, useful to know that there's no cache as such - rules that one out. Looks like the link could be the issue, I'd like to see if it's possible to somehow get to the messages to get rid of them before applying the upgrade though. I'm confused as to why the IMAP connector can see them but MAPI can't...
  • It may just be that that particular message is corrupt.

    You could make a telnet connection to the POP3 port on the server, log in to the user's mailbox, and issue the LIST and UIDL commands and look for the message number in both listings. If you don't see it in both listings then there's a problem with the mailbox and/or message. Try deleting the message in question and see if that resolves the issue.


    The formatting in my comment didn't come out right (Duh). Here's what I was trying to get across as far as the telnet commands are concerned:

    1. telnet servername 110 (whatever port number that POP is running on)
    2. user username
    3. pass password
    4. list
    5. uidl
    6. dele message_number
    7. quit
    MikeB : Thanks for the quick response joeqwerty. Unfortunately every time I try to connect to the POP3 port using Telnet it closes the connection straight away. POP3 access is enabled under Client Access and for the user. Anything else that I could be missing here?
    joeqwerty : Not knowing your environment (firewalls and such) I would recommend running telnet directly on the server. Make sure to check what port POP is configured to use on the server and connect to that port: telnet server 110 user username pass password list uidl dele message_number
    joeqwerty : See the edit on my answer for a response to your comment. Sorry for any confusion.
    MikeB : Finally connected. Looks like the message doesn't exist: RETR 1523 -ERR The specified message is out of range. Which makes me wonder how the IMAP connector is finding it in the store at all... unless I've misunderstood RETR or the error reference, of course.
    joeqwerty : There's a disconnect between what the IMAP client and server thinks is there. When you Pop'ed to the server and issued the LIST and UIDL commands, did message number 1523 not show up? If not, then I would recommend deleting the Outlook profile and creating a new one. Sometimes the "agreement" between the IMAP\POP client and the server gets out of whack and they no longer "agree" on which messages the client has or doesn't have (message state). If message number 1523 did show up in the LIST and\or UIDL commands then I would recommend trying to delete it from the server using the dele command.
    From joeqwerty

Can't map a Samba share from Windows 7: "The specified network password is not correct"

I have had a Samba server set up for some time now. It is a hardware NAS which unfortunately does not provide access to the Samba logs (the exact model of the NAS is called the Addonics NAS Adapter).

I also have a Windows Vista and a Windows XP machine - from both I am able to map \\192.168.0.20\Smd with no errors (net use l: \\192.168.0.20\Smd works, after asking for my username and password).

I also bought a brand new computer with Windows 7, and when I try to execute the exact same net use command on it - using the exact same username/password pair - I get a "The specified network password is not correct." message. I also tried mapping from the Windows Explorer menu and got the same error.

I synchronized the clocks of the two machines, tried again... and yet the same error persists.

What is really surprising here is that mapping works from the Windows XP and Windows Vista machines but fails from the Windows 7 machine using the exact same command and username/password. Does anyone have any idea of what could be causing this or how to solve the problem? Thanks

    Dean J : Can you check the SMB server logs?
    Kara Marfia : Best to post your responses as comments to specific answers or by editing your original question (otherwise things get confused and out of order with upvotes).

  • You need to use double leading backslashes in your UNC paths, like this: net use I: \\192.168.0.20\Smd
  • This is probably not it, but you can try disabling SMB2 on the Windows 7 machine. SMB2 was introduced with Windows Vista so if the Vista machine works I would think the 7 machine would work as well, but it won't hurt to try it.

    Win7 Home User : I tried that. Disabled SMB2 using the commands (I am using === as a separator): === sc config lanmanworkstation depend= bowser/mrxsmb10/nsi === sc config mrxsmb20 start= disabled === but it didn't work (even after a reboot).
    From joeqwerty
  • I hesitate to post this as an answer, because it's so flimsy, but this may be a purely Win7 Home thing that people using pro or ultimate editions wouldn't see. I remember reading something about homegroups - and they may have limited functionality or a change in syntax?

    You may want to check with superuser.com if only because they may have more experience with the home version. (Home version may also mean the question belongs on SU, but I feel like the votes should decide that... seems a bit gray-area to just mod it over).

    Are you able to pull up the share by sticking \192.168.0.20\Smd in the run box?

    Garrett : Pro and Ultimate wouldn't handle networking differently than Home, but it could be a change to the networking in Windows 7 that caused the incompatibility.
    Win7 Home User : Thanks for the responses - David Mackintosh gave a tip that solved the problem!
    Kara Marfia : @Garrett - well, I guess you can't join Home to a domain? But good to know it's unchanged otherwise.
    Garrett : Right, no domain joins and some other things like being a Remote Desktop destination, bitlocker, etc... but if they both do something, they do it the same way.
  • Can you get it to work if you remove the password requirement?

    Win7 Home User : No. I tried adding a guest user with no password required - same results (works from XP and Vista. Fails from Win7).
    From Garrett
  • It could be an issue with NTLM requirements. I've read that some people have to do the following to get their Windows 7 box to work with Samba:

    Control Panel -> Administrative Tools -> Local Security Policy

    Local Policies -> Security Options

    "Network security: LAN Manager authentication level" -> set to "Send LM & NTLM responses"

    "Minimum session security for NTLM SSP" -> disable "Require 128-bit encryption"

    Win7 Home User : I wasn't able to find a "Local Security Policy" menu - maybe because my Windows is Home? In any case, the closest thing I found was this option: "File sharing connections: Windows 7 uses 128-bit encryption to help protect file sharing connections. Some devices don't support 128-bit encryption and must use 40- or 56-bit encryption. [ ] Use 128-bit encryption to help protect file sharing connections (recommended) [ ] Enable file sharing for devices that use 40- or 56-bit encryption". I enabled the second option, but nothing seems to change (even after a reboot).
    Dominic D : Hrrm...I dont have a copy of Windows 7 Home Premium to verify but according to http://social.answers.microsoft.com/Forums/en-US/w7security/thread/0c8300d0-1d23-4de0-9b37-935c01a7d17a it's not available in that version of windows. I have no idea how you can modify those settings without it.
    From Mr Furious
  • Windows 7 and Windows Server 2008 R2 use NTLMv2 by default. Older implementations of Samba don't support this and will return a password failure.

    We had this exact same problem on our NAS.

    Two solutions

    1. Bug your NAS vendor to update their implementation (we've just received a patch).
    2. Push a policy change either via GPO or via Local Policy. The setting you need to modify is: Local Computer Policy -> Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network security: LAN Manager authentication level. Set it to Send LM & NTLM - use NTLMv2 session security if negotiated. This gives you the best of both worlds, better security if supported, fall back if not. This should be the default Windows7/Windows2008r2 option IMO, but for whatever reason it isn't.
    Win7 Home User : Thanks for the response - updating the firmware of the NAS did *not* work, unfortunately, but with the tip by David Mackintosh I was able to change the auth settings.
    From Dominic D
  • Dominic D's explanation of what is going on is correct: Vista, Windows 7, and Windows Server 2008 R2 use NTLMv2 by default, and older implementations of Samba don't support this and will return a password failure. Fortunately you can tell Vista and Windows 7 (and I presume Server 2k8) to fall back to the v1 protocol if v2 is not available.

    These are my notes for Vista, they worked for Windows 7 Pro 64-bit.

    1. Start -> run -> secpol.msc
    2. Local Policies -> Security Options -> Network Security: LAN Manager Authentication
    3. Change "Send NTLMv2 response only" to "Send LM & NTLM - use NTLMv2 session security if negotiated"

    If you are stuck with a Vista Home, there is no secpol.msc. Instead:

    1. Start -> Run -> regedit
    2. navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
    3. for LmCompatibilityLevel, change the '3' to a '1'
    4. Rebooting might be necessary at this point.
    Win7 Home User : Thank you very much! I created a new LmCompatibilityLevel entry under the place you indicated (it was missing) as a REG_DWORD with a value of 1, and after a reboot I was able to mount the share under Windows 7! I now have a bizarre problem where all folders under the mapped drive appear invisible in a cmd.exe window (again, Windows 7 only) - but I can live with this (I just hope Cygwin works properly... installing now).
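
For reference, the same registry change can also be made from an elevated command prompt (same key and value as in the steps above; reboot afterwards):

    reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f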

What is the best TERM type on AIX for use with PuTTY, and which PuTTY settings should be tweaked?

I manage many AIX machines, generally version 5.3.

Basic terminal function works just fine, but it seems like some things don't. For example, nmon displays sequences like lqqx instead of the line-drawing characters.

lqnmonqqqqqqqqr=ResourcesqqqqqqqqHost=sigloprodqqqqqqRefresh=2 secsqqq11:29.
1 Memory qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqx
x          Physical  PageSpace |        pages/sec  In     Out | FileSystemCx
x% Used       97.4%      1.3%  | to Paging Space   0.0    0.0 | (numperm) 5x
x% Free        2.6%     98.7%  | to File System    0.5    1.5 | Process   2x
xMB Used    7980.3MB    26.2MB | Page Scans        0.0        | System    1x
xMB Free     211.7MB  2021.8MB | Page Cycles       0.0        | Free       x
xTotal(MB)  8192.0MB  2048.0MB | Page Steals       0.0        |           -x
x                              | Page Faults       3.0        | Total    10x
x------------------------------------------------------------ | numclient 5x
xMin/Maxperm     781MB( 10%)  3904MB( 48%) <--% of RAM        | maxclient 4x
xMin/Maxfree     248   1088       Total Virtual   10.0GB      | User      7x
xMin/Maxpgahead    2    128    Accessed Virtual    3.2GB 31.8%  Pinned    1x
x                                                                          x
xqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqx
x                                                                          x
x                                                                          x
x                                                                          x
x                                                                          x
mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj

I am currently using the xterm terminal type on AIX, specifying UTF-8 encoding in PuTTY, using Unicode line-drawing code points in PuTTY, and using the DejaVu Sans Mono font, which should include all line-drawing characters.

nmon does display correctly when I run it from an xterm on that same machine.

Current terminfo entry for TERM=xterm is as follows:

sigloprod ~ $ echo $TERM
xterm
sigloprod ~ $ infocmp
#       Reconstructed via infocmp from file: /usr/share/lib/terminfo/x/xterm
xterm|vs100|xterm terminal emulator,
    am, km, msgr, xenl,
    cols#80, it#8, lines#25,
    batt1=f1, batt2=f1md, bel=^G, bold=\E[1m,
    box1=lqkxjmwuvtn, box2=lqkxjmwuvtn, civis=\E[?25l,
    clear=\E[H\E[2J, cnorm=\E[?25h, cr=\r,
    csr=\E[%i%p1%d;%p2%dr, cub=\E[%p1%dD, cub1=\b,
    cud=\E[%p1%dB, cud1=\n, cuf=\E[%p1%dC, cuf1=\E[C,
    cup=\E[%i%p1%d;%p2%dH, cuu=\E[%p1%dA, cuu1=\E[A,
    cvvis=\E[?25h, dch=\E[%p1%dP, dch1=\E[P, dl=\E[%p1%dM,
    dl1=\E[M, ed=\E[J, el=\E[K, font0=\E(B, font1=\E(0,
    home=\E[H, ht=\t, hts=\EH, ich=\E[%p1%d@, ich1=\E[@,
    il=\E[%p1%dL, il1=\E[L, ind=\n, kbs=\b, kcub1=\E[D,
    kcud1=\E[B, kcuf1=\E[C, kcuu1=\E[A, kdch1=^?,
    kf1=\E[11~, kf10=\E[21~, kf11=\E[23~, kf12=\E[24~,
    kf2=\E[12~, kf3=\E[13~, kf4=\E[14~, kf5=\E[15~,
    kf6=\E[17~, kf7=\E[18~, kf8=\E[19~, kf9=\E[20~,
    khome=\E[H, kich1=\E[2~, knl=\r, knp=\E[6~, kpp=\E[5~,
    ktab=\t, mc4=\E[4i, mc5=\E[5i, nel=\n, rc=\E8,
    rev=\E[7m, rf=/usr/share/lib/tabset/vt100, ri=\EM,
    rmcup=\E[?7h, rmkx=\E>, rmso=\E[m, rmul=\E[m$<2>,
    rs1=\E>\E[1;3;4;5;6l\E[?7h\E[m\E[r\E[2J\E[H, sc=\E7,
    sgr=\E[%?%p1%t;7%;%?%p2%t;4%;%?%p3%t;7%;%?%p4%t;5%;%?%p6%t;1%;m%?%p9%t\E(0%e\E(B%;,
    sgr0=\E[m\E(B, smcup=\E[?7h\E[?1l\E(B\E=, smkx=\E=,
    smso=\E[7m, smul=\E[4m$<2>, tbc=\E[3g,
  • Edit:

    AIX uses two capabilities called box1 and box2. Try modifying the commands below to look for them instead of acsc (an adapted sketch appears after the comments on this answer).

    Previously:

    Try this command:

    for t in $(find /lib/terminfo -type f -print); do echo; echo -n "$t "; tput -T$(basename $t) acsc; done
    

    or:

    for t in $(find /lib/terminfo -type f -print); do echo $t; infocmp $(basename $t)| grep acsc; done
    

    Replace "/lib/terminfo" with the path to your terminfo files. Look for lines that do not look like this:

    ``aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~

    One of those terminals has a good chance of working for you.

    You can set that terminal type using TERM=termtype at a Bash prompt or in your ~/.bashrc or by doing this:

    TERM=termtype nmon
    

    to have it set only for that invocation. If you'd like to set it for an SSH session from the local end, you can do:

    TERM=termtype ssh ...
    

    if AcceptEnv in the remote system's /etc/ssh/sshd_config is set to allow it, and SendEnv in the local system's /etc/ssh/ssh_config or the user's local ~/.ssh/config is set to send it.

    Joe Koberg : I am reluctant to add entries to the systemwide terminfo or termcap files, or to include files in my home directory or login scripts, because it's a collection of a dozen or more machines who's 'customization complexity' I want to keep to a minimum (and it would take weeks to roll systemwide changes through change control, assuming they were even permitted.)
    Joe Koberg : 1.) -printf is not a valid option to find on this AIX. 2.) xterm meets the above criteria and is not currently working properly.
    Dennis Williamson : I modified the command to use `basename` instead of `printf`.
    Joe Koberg : Syntax still incorrect; i added the hyphen to the `-type` option to find. No output produced.
    Dennis Williamson : I don't think it matters, but what are your remote system's `$LANG` and `$LC_ALL` Bash variables set to?
    Dennis Williamson : "No output produced" Did you check to make sure that the path in the `find` command is correct? It is sometimes in `/usr/share/terminfo`. Does `tput -Txterm acsc` produce "``aaffgg..."?
    Joe Koberg : I understood the original intent of your commands and made them work before I posted a reply. The original path is correct. "No output produced" means I got my shell prompt back without any intervening output. tput -Txterm acsc produces nothing. And there is no acsc capability in the infocmp output.
    Joe Koberg : LANG=C, LC_ALL is unset.
    Joe Koberg : Presumably if this AIX had a UTF-8 locale, using it would have fixed the problem.
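
Adapting the scan above for the AIX terminfo path from the question and the box1 capability might look something like this (an untested sketch using infocmp, since tput's handling of AIX-specific capabilities is uncertain):

    for t in $(find /usr/share/lib/terminfo -type f -print); do
        echo "$t"
        infocmp $(basename "$t") | grep box1    # print the entry name and its box1 line, if any
    done

Entries that print a box1 line are the candidates worth trying with nmon.
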
  • Try setting TERM=xterm

    Joe Koberg : Term type is already set to `xterm` as stated in question.
    From Not Now
  • Try Putty's Configuration menu: Window -> Translation -> Received data assumed to be in which character set -> UTF-8

    If UTF-8 isn't it, try some of the other values.

    Joe Koberg : Turns out setting it to ISO-8859-1 works! I presumed the remote end was sending escape codes to put the terminal in line drawing mode, then sending the alternate character set `lqqk` for line drawing (http://www.vt100.net/docs/vt102-ug/table5-13.html)... and adjusting PuTTY shouldn't change that aspect of emulation. But I guess PuTTY ignores that when set to UTF-8 and looks for line drawing characters ONLY as unicode code points. Browsing the PuTTY 0.6 source seems to verify this at line 2573 of TERMINAL.C

DNS using CNAMEs breaks MX records?

We are trying to move all the websites we host to CNAMEs, as we are planning on moving servers in the new year and would like the ability to move some clients to one server and other clients somewhere else. We were planning on giving each client a unique CNAME which we can then change at a later date. (We have other reasons for doing this now, but that is the main one.)

We have been testing out this theory with a few of our own domains and it seemed to be fine. However when checking the MX records on a domain I got the CNAME value back rather than the MX record.

Sadly all of these domains are done via control panels, but I am guessing they are just writing zone files for me.

I want to create two CNAMEs for company.com:

company.com. IN CNAME client.dns.ourserver.com
www          IN CNAME client.dns.ourserver.com

The MX record is something like the following:

company.com  IN MX 10 mail.company.com

We have an A record for mail.company.com

Doing:

host -t mx company.com

Returns the CNAME value rather than the mx record.

Is this expected behaviour?

I have managed to get the above configuration working with the 123-reg.co.uk control panel, but not sure if that is more luck than anything.

  • This is a common error. You cannot use a CNAME RR for your zone apex (e.g. company.com) and also define other resource records, such as MX, for the same name (see the example zone snippet below).

    See Why can't I create a CNAME record for the root record? and RFC1034 section 3.6.2 for details:

    If a CNAME RR is present at a node, no other data should be present; this ensures that the data for a canonical name and its aliases cannot be different.

    From joschi
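
In zone-file terms, the usual workaround is to keep the apex as an A record (pointing at whatever address client.dns.ourserver.com resolves to) and only CNAME the www host. A sketch, with placeholder addresses from the documentation range:

    company.com.       IN A      192.0.2.10     ; apex: A record, not a CNAME
    company.com.       IN MX 10  mail.company.com.
    www.company.com.   IN CNAME  client.dns.ourserver.com.
    mail.company.com.  IN A      192.0.2.25

The downside, of course, is that moving a client then means updating the apex A record as well as the shared CNAME target.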

How can I remove the GUI bits from a Red Hat Enterprise Linux install?

I am looking at a farm of Red Hat Enterprise Linux (RHEL) 5.3 servers, which all have GNOME and Xorg installed, none of which need them. They were deployed by a third party from a VM template, and I don't know all of their history. What I do know is that none of them run an application that actually requires a full GUI install. However, it is possible that some run an application that requires some X libraries (ImageMagick comes to mind).

According to yum grouplist, the 'X Window System' group is not installed, so I can't use yum groupremove here.

Is there a sufficiently low-in-the-dependency-chain package, or set of packages, that I can remove, which will pull out GTK, GNOME and Xorg? Alternatively, if the removal generates a list of packages to be removed before it starts, we can reinstall the applications we need afterwards, which will pull the X libraries back in when we are done.

  • I haven't done this with real, live RHEL, but I have pried X out of CentOS 5.1 and 5.2. (I've been pulling X off of Redhat-derived distros for years... ever since the dependencies were made such that you, basically, had to install X, whether you wanted it or not.)

    I don't recall the exact dependencies, but there are some annoying ones that require a "--nodeps" argument to rpm in order to get the offending RPMs to remove. I just start ripping out packages I don't need, adding more and more packages to the "rpm -e" command line, and finally adding "--nodeps" when necessary.

    I don't know that I'd recommend doing this for production machines. I don't deploy any quantity of CentOS in production environments, so it's probably alright that I potentially screw up my installation. In a production environment, disk space is cheap. I don't like having unnecessary software installed, from a security perspective, but The Right Thing(tm) is probably to rebuild the packages with offending dependencies (without the offending dependencies, obviously) rather than just ripping out and potentially making a system unusable.

    crb : I agree with your "the right thing" assessment, but at this point, I'd like to investigate a smarter answer, at least for our dev/staging environments. If you were able to remember any dependency problems you had it would be great.
    Evan Anderson : I'm not sure what you mean by "a smarter answer". Anything other than rebuilding packages w/o dependencies is going to leave you with broken dependencies as far as RPM is concerned. You'll need to test all your applications to insure that they function w/ the broken dependencies, since anything you install is going to assume the dependent packages are there. If it'll help, I can send you the output of an "rpm -qa" from one of my CentOS 5.2 boxes. I didn't really keep notes of what I've removed, since I don't deploy CentOS in production to my Customers.
    crb : Sorry for not being clear; I meant "smarter" solely in terms of not taking as long to rebuild entire machines and remigrate. Ultimately, I want to be able to say "if I remove package A, packages W X Y and Z" will be removed, and if I know I don't need W-Z, I can go ahead and remove A safely. I suspect that using yum here would be safer than plain rpm.
  • I am doing basically the same thing at the moment. My method is mainly manual, due to the lack of tools for this, but it might be of help.

    First, deploy a new server with the correct list of packages you require, i.e. without X and GNOME. Then, diff the package lists on the old and the new server (a rough command sketch follows below). It is not wise to just try and remove the whole diff from the old server - you never know what'll break - but it can be a start. Take some big packages from the diff that you are sure will not break stuff (like nautilus) and start from there. Try an rpm -e --test on the compiled list, rinse, repeat. The final list can then be used on the other servers fairly painlessly, given that the servers are all similar.

    I heartily agree this is not a nice, clean, standardized way of doing this, but I value removing the Gnome and X crud from my servers more highly than having some streamlined process to get there. Mind, btw, that I didn't install these servers, I am merely cleansing them. ;-)

    We only remove the packages during patching downtime, so we can test the app (Oracle, mostly) directly after removing them. In case of breakage, we yum install the list and try again with a smaller subset. Not that that ever happened, but you should be prepared for the worst. Like Evan said: this is risky business.

    My main target is to remove the bigger X apps from the servers (like, again, nautilus, firefox, openoffice, etc.) mainly for the reason of decreasing the security footprint. The fact that some smallish apps will possibly remain installed is fine with me - for now - because we are 'catching the bigger fish', so to speak.

    From wzzrd
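
A rough sketch of the diff-and-test approach described above (the /tmp filenames and host names are just examples):

    # on the old server and on a freshly-installed minimal server, dump the installed package names
    rpm -qa --qf '%{NAME}\n' | sort > /tmp/pkgs-$(hostname).txt

    # candidate removals: packages present on the old box but absent from the minimal one
    comm -23 /tmp/pkgs-oldbox.txt /tmp/pkgs-newbox.txt > /tmp/candidates.txt

    # dry-run the removal; rpm will list anything that would break
    rpm -e --test $(cat /tmp/candidates.txt)
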
  • I made it work with Kickstart. If you create a kickstart config file you can exclude base from the packages definition and get a really minimal install. I think it was so minimal it didn't even have yum and a few others, and I had to add those packages back in.

    From mfarver
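
A sketch of the %packages section that gives that kind of minimal install (package and group names may need adjusting for your RHEL 5 tree):

    %packages --nobase
    @core
    yum
    openssh-server
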
  • You might also consider just not starting the X server / GDM at boot and leaving the packages there. I guess they will take up some space and add time to updates, but other than that I wouldn't think they will cause any issues.

    For your situation you might really want them removed, or you might have already considered this, but I just thought I would put it out there :-)

    crb : Thanks Kyle, that's where we are at, at the moment; I hoped that this lovely dependency-resolving system Red Hat has would make doing the tidy up trivial.

An easy way to see what in /etc/init.d/* is running?

I left my laptop cable at home, and I'm running on battery. I'm using Ubuntu Linux 9.10. I know about powertop and I'm using that. It told me that a few things I'd installed (postgres, mysql, etc) were running, so I stopped them.

However, is there a command that'll tell me all the things from /etc/init.d/ that are running? I can then decide to stop some of them.

  • One would hope so, but I know of no tool that does this. The problem mainly lies in the fact that - at least on Ubuntu - a lot of initscripts do not have a 'status' command. So, running a snippet like this

    for service in /etc/init.d/*; do
        "${service}" status
    done
    

    will not work, because you will be spammed with error messages ad nauseam, telling you the status command does not work for a particular service (see the sketch below for a quieter variant).

    You could do something similar with pgrep, but you would need to script a little something and know the names of the actual processes that are started by the init scripts.

    Kyle Brandt : Matters less here, but I think for service in /etc/init.d/*; do "${service}" ... would be better. Eliminates the need for call to ls, and is safe for file names with spaces in them (although, /etc/init.d is not likely to have something with a space in it).
    wzzrd : @Kyle: +1 Good point, will do a little edit.
    From wzzrd
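
A slightly quieter sketch of the same loop, which throws away the error chatter and only prints services whose status command succeeds (by LSB convention, an exit status of 0 from status means the service is running):

    for service in /etc/init.d/*; do
        out=$("$service" status 2>/dev/null) && echo "${service}: ${out}"
    done

Scripts that have no status command at all simply produce no line, so the output is incomplete in exactly the way described above - but it is at least readable.
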
  • On Ubuntu 9.04/9.10:

    sudo service --status-all
    

    For everything that responds to 'status' you'll get a + or - flag, and a ? for those that don't.

    edit You can also install 'chkconfig' to see what is set to start in the various run levels.

    CK : That's why I said to install chkconfig! Because I run a variety of different *ixes at home and work, I checked --status-all on a 9.04 Ubuntu before I posted. Obviously, it could have been removed in 9.10, but it seemed unlikely.
    wzzrd : Right. Apparently my version of Ubuntu here is too old and doesn't do status-all. If new ones do, I should shut the hell up ;-). Do a minor edit to your answer, so I can revoke my downvote.
    CK : :) Any thoughts on an edit - it looks pretty clear to me.
    CK : Edited to include known working versions of Ubuntu.
    wzzrd : Here's your point back :-)
    From CK
  • The best way to see what is actually running is to run 'top'. Pressing 'm' will sort processes by memory usage. Note that the 'vmsize' column is usually overstated by about 90Mb.

    wzzrd : You don't want to look at vmsize. You want to look at RES or RSS (depends on version of top). The resident set size shows the amount of actual physical RAM a program uses, which is a lot more interesting than the VMSIZE. And even then, the value of knowing a program's RSS is debated (Google it). Anyway, in this case, I'd use ps, not top. Top rarely has enough room to show all processes.
    From pjc50
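
As a concrete version of the ps suggestion in that comment, something like this lists the biggest resident-memory users on a procps-based system such as Ubuntu (a sketch):

    ps -eo pid,rss,comm --sort=-rss | head -n 20    # top 20 processes by resident set size (KB)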