Saturday, January 29, 2011

Are there any available open source CMSes out there that output valid HTML 4?

So far, the only one I've found is the excellent tangocms.

However, although tango is simple enough for me to use, I'd like something simpler for use by non-programmer clients.

Silverstripe can be made to use html 4 templates, but one needs to be a very good php programmer in order to implement it.

  • I had good results using Drupal and WordPress.

    WordPress is more of a blog engine, but Drupal on the other hand is versatile and quite mature and stable; you can easily add an administration section for clients. There are also a lot of useful modules to choose from.

    As for valid HTML, I've never had this kind of issue.

    Good luck!

    EDIT: HTML 4 is already over 10 years old, so you might have to look at older CMSes. Any CMS still targeting HTML 4 is unlikely to be popular.

    Arjan : XHTML is over 10 years old as well...
    Chris S : What do you think is broken about xhtml? What specific features are you using?
    artifex : We are using Drupal. While it may be heavy, knowledge-wise, to implement, people with basic HTML (or even less) knowledge can use your product, if done properly.
    From Embreau
  • I've used http://ckeditor.com/ before, and it's pretty easy to use for non-techies.

    If you want something to manage your entire site, Wordpress is about as simple as they come to get going, and has lots of extensions that do a variety of clever things.

    From Will
  • Have you considered Plone? It validates.

  • Isn't the reason XHTML appears broken in IE that IE doesn't adhere to certain HTML standards? For instance, the same page can appear broken in IE but fine in Firefox or Opera.

    Back to the original post; is it possible you could just use a different browser, and then it wouldn't matter if XHTML was used?

    Jason Antman : Most of the web development world is moving to XHTML. Very rapidly. At the moment, it's considered the de-facto minimum standard for sane development. Are you using a recent version of IE? Are you trying to do something very strange with the page? My personal site and blog are valid XHTML 1.1 Transitional, and appear fine in IE8 (or so I'm told, I've never used it...)
    Webs : My statement was more to the OP than anything else. I don't use IE so I don't have any problems ever viewing XHTML. But users that view my website in IE have a much different experience. So my question to the OP was: can he use something else?
    From Webs
  • MODx gives you 100% control over the HTML, including the HTML that the various widgets, plugins, etc. output, so the quality of the HTML output is entirely within your control. Drupal is the same way, other than what comes out of the WYSIWYG editors. Those are all plugins, so it's up to you to pick one that is good for your needs.

    J.Ja

  • I'm a long-time user & fan of Textpattern, and have used its tag-based template system to produce valid HTML4, XHTML (1.0 & 1.1), and HTML5 sites. There may be certain tags and/or plug-ins that prefer to render more modern, XHTML-like, code at this point, but you should still be able to generate valid HTML4 relatively easily.

    From morgant

Enterprise user management

I am looking for an enterprise user management system that meets these requirements:

  • Delegated user administration: The group manager should be able to grant access to his supervised employees (without having to contact any administrator either to grant access or maybe create users).
  • A group manager should be able to create subgroups, restricted to permissions he already holds, to which he can add supervised employees.
  • If a manager removes access to a supervised group, then all the subgroups will also lose access.
  • Web based User Interface.
  • LDAP interface to query users and groups (or may not exist at all if it is integrated in a single application).

Do you know if there are any systems that meet these requirements?

  • I'm working in an environment where there is a system installed with similar functionality to the one you describe. I don't know if it fulfills all of your requirements, but it might be worth having a look at it: http://www.nordicedge.se/

  • Active Directory has all the functionality you requested above built into its management tools. Novell also has a centralized directory/authorization product called eDirectory.

    mfinni : Well, except for the web-front-end part. While one could be built, it isn't out-of-the-box. And he doesn't explain what he's granting access *to* - whether it's SharePoint, or Exchange mailboxes, or NTFS files. That part would have to be built into the web app as well, those aren't defined in AD, they only use AD.
    Eduardo : @mfinni The applications that will use the system just define some roles and the system should just tell me if the user has the role or not
    From Josh Budde
  • We use Computer Associates Identity Lifecycle Management product. It works well, but does require a large investment to install and maintain.

    From JD

Windows Server 2003 can't see Vista machine

Hi there, I've got a real PITA problem that I'm sure has a really simple solution. I have a Windows Server 2003 machine that needs to be able to see the network name of a Vista box - but refuses to. It can see the Vista box (and even access its shared folder) if I enter the Vista box's IP address.

Problem is: SQL Server refuses to do replication with anything other than the "actual server name". That means that the 2003 machine needs to be able to connect through the Vista machine's network name... not just its IP address.

I'm guessing it's a simple incompatibility between OS's, but I'm sure there's got to be a simple way of fixing it.

Note: Yes, the Vista machine can connect to the 2003 machine, no problem. And other machines in the office can connect to both the Vista machine and the 2003 machine (they have more recent OS's).

Thanks for any help!

  • Try turning the firewall off on the Vista machine, reboot it and see if it shows up. Second thing, check the Network type, set it to Private if it is set to public. Reboot and check if the Server "sees" it.

    Django Reinhardt : Thanks, all firewalls are disabled (we're trying everything), and as I said, other machines can connect to the Vista machine no problem. Is it not likely to be an OS compatibility issue? :-/
    dimitri.p : If the server "sees" other Vista machines, check the DNS settings of the one that is not showing up against one of the others. I'm going to guess :) that the server is up to date with Windows Updates, as is the Vista machine
    From dimitri.p
  • If other machines can connect to it then it's probably more of a SQL issue. Did you run the User Provisioning Tool for Vista on the workstation after installing SQL? If not, give that a shot. It's located at %ProgramFiles%\Microsoft SQL Server\90\Shared\sqlprov.exe by default.

    Also, make sure the SQL Browser service is running on the Vista machine.

    Also, what version/edition of SQL is on the Vista machine?

    Django Reinhardt : It's not just a SQL thing, because we have a shared folder set up on the Vista machine (x32 Ultimate, BTW), and it can't be accessed by entering the network name (unlike with other machines). It's a real head scratcher :( Will give the SQLProv a shot, though. Thanks.
    From squillman
  • Looks like a DNS issue.

    Probably the vista machine's name can't be resolved via DNS, so NetBIOS is used... which, as everyone knows, may or may not work, depending on a lot of factors.

    Can you ping the Vista machine using its network name?

    Django Reinhardt : Aha! No, I cannot ping the Vista machine from the 2003 machine! (But I CAN do it from the other machines that can connect to the Vista machine.) How do I fix this? Do I need to install anything? Thanks so much!
    squillman : +1 Yeah, this is more likely the cause.
    Django Reinhardt : If I manually edited the HOSTS file, would that fix it? :)
    Django Reinhardt : Editing the HOSTS file *did* fix it... but I'm wondering if there's a better solution to do with fixing the DNS issue? Thanks!
    dimitri.p : Did you check the network settings against a machine the server "sees" ?
    squillman : @Django: Verify that there's an address record in DNS for the Vista box and that the 2008 box is using the right DNS server(s). Try removing the hosts entry then doing a `ipconfig /flushdns` on the 2008 box.
    dimitri.p : 2003 machine :)
    From Massimo
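    The resolution test in the accepted answer can be scripted from any box with Python installed. A minimal sketch (the hostnames in the usage lines are placeholders; substitute the Vista box's actual name):

    ```python
    import socket

    def resolve(name):
        """Return the IPv4 address for a hostname, or None if it doesn't resolve."""
        try:
            return socket.gethostbyname(name)
        except socket.gaierror:
            return None

    # A name that resolves via DNS or the hosts file returns its address;
    # a name the machine can't look up returns None.
    print(resolve("localhost"))
    print(resolve("no-such-host.invalid"))
    ```

    If the name comes back as None here while the IP address works directly, the fix belongs in DNS (an address record plus `ipconfig /flushdns`), with the hosts file only as a stopgap.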

How to physically secure a public terminal?

An organisation that I do work for has made the decision to move several public access terminals (Tower/Monitor/Keyboard/Mouse combo) into a public place.

These machines are already secured with change preventing software (either DeepFreeze or SteadyState) but will now live in a publicly accessible area with minimal observation.

What are the best ways to physically secure machines like this against theft? Are there any additional software security mechanisms which should be considered? What about securing them against the theft of peripherals?

Suggestions and advice would be appreciated.

Edit: The machines are not in the main flow of traffic (they are not on the main floor) but the area they are in is moderate traffic, and accessible for a good portion of each day. We're using PCs as terminals because the budget for this is very small, so we are recommissioning old machines to serve the purpose.

  • HP and Dell sell brackets or locks for most of their PCs to make them harder to steal. Get a good salesman from either, or a reseller like CDW; they'll make the process much easier.

    There are also companies that make hardened PC rigs for public use. Usually made out of all steel and acrylic so they're much harder to damage. Again, find a larger resale company and start asking.

    Also, if it's really public then you can expect them to be vandalized and stolen on a regular basis no matter what you do. It's simply a cost of doing business, and you have to weigh the gains against the losses.

    From Chris S
  • Cable locks for the tower and monitor - I know most laptops have little divots for a standard cable lock to fit into, you should look into monitors and towers that also have them. Then you loop them around the desk if you can. If you already work with someone selling you office furniture, you can buy them with cable-lock-friendly bits on them.

    Unless you have a good reason to use them, epoxy all of the unused peripheral ports. Have someone do a regular check for physical keyloggers getting attached to the keyboard. Unplug the floppy drive ribbon cable, if applicable.

    From mfinni
  • You could get an integrated kiosk solution. I've never specced one of these, but there's a whole bunch of suppliers on the net who are more capable and specialised.

    Ok, maybe that's outside your budget.

    You could put a standard PC in the view of the public, but you'd have to really lock it down.

    • Physically disable the USB ports, either disconnect the headers from the motherboard, or fill the ports with epoxy resin.
    • Put the entire unit in a lockbox, only the essential access is available, ie, holes for the keyboard and mouse cables.
    • Instead of a mouse, how about a trackpad or trackball, these are easier to bolt down, and potentially harder to steal.
    • Get a highly robust keyboard, one that can take a good bashing, preferably waterproof too, in case anyone spills their coffee on it!
    • You'll need a secure firewall too, as well as as much software security as you can get your hands on.
    • Preferably, write your software to require a dongle for it to run, so they can copy the executable files, but they'd need the hardware key to make it usable.

    At the end of the day, people will vandalize your hardware: they'll steal the mice, cut the cables, and steal the caps off the keyboard.

    Weigh up the cost of replacing these weekly, or buying a hardened kiosk, then make the decision based on that.

    Zephyr Pellerin : I just thought I'd add on that you can get some of those flexible keyboards. ( http://www.google.com/products/catalog?hl=en&client=firefox-a&hs=MUC&rls=org.mozilla:en-US:official&resnum=0&q=Flexible+keyboards&um=1&ie=UTF-8&cid=8674251919358054865&ei=3B6MS86VNoXitgOakJiFAw&sa=X&oi=product_catalog_result&ct=image&resnum=5&ved=0CCcQ8gIwBA# ) Have one around here and they are nigh indestructible.
    Tom O'Connor : They look pretty groovy, i bet they're no match for a stanley knife though!
  • Hi, if you're willing to switch to Mac using software like eCrisper (http://ecrisper.com), then you could use http://www.ianchor.net/

    Tom O'Connor : Interesting, but why not just drill through the "foot" aluminium, and bolt it to the desk?

How do I setup JBoss 5.1.0.GA to run multiple instances?

Does anyone have any experience or advice in setting up multiple JBoss 5.1.x instances on the same machine that has 1 network card?

Here is what I did:

  1. Installed JBoss 5.1.0.GA into c:\myjboss. Then I copied the server/default directory to server/ports-01 and server/ports-02 so each has its own config. Did I assume correctly?
  2. Ran .\run.bat -c ports-01
  3. Ran .\run.bat -c ports-02

At this point there are 2 instances but the second instance doesn't load correctly because of what is probably a few port conflicts. For example: the http port ends up being 8080 for both instances, which it gets from line #49 in the C:\myjboss\server\all\conf\bindingservice.beans\META-INF\bindings-jboss-beans.xml file. Earlier in the server load it clearly gets the value from line#63 in that same file. I don't know why it gets part of the port config from line #49 and the other part from line#63. Confused.

I also tried: .\run.bat -Djboss.service.binding.set=ports-01 -c ports-01 and it made little difference.

Any ideas on what I am doing wrong?
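A quick way to see which of the usual ports the first instance has already bound (and hence where a second instance will conflict) is to probe them before starting the second server. A minimal Python sketch; the port list below is illustrative (HTTP, JNDI, RMI invoker, AJP), not a complete JBoss port map:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to connect to the port; a successful connect means something is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Ports a default JBoss 5.1 instance typically binds.
for port in (8080, 1099, 4444, 8009):
    print(port, "in use" if port_in_use(port) else "free")
```

Run it with the first instance up: every port it reports "in use" is one the second instance's binding set must shift.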

  • I got it working on my own. The answer was these commands:

    .\run.bat -Djboss.service.binding.set=ports-01 -c ports-01

    .\run.bat -Djboss.service.binding.set=ports-02 -c ports-02

    Also, I had to copy the server/default to 2 new directories called server/ports-01 and server/ports-02 ...

    Then , in the server\ports-01\conf\bindingservice.beans\META-INF I had to remove references to instances ports-02, ports-03, and "default" from it.

    Then , in the server\ports-02\conf\bindingservice.beans\META-INF I had to remove references to instances ports-01, ports-03, and "default" from it.

    Then, finally, I deleted the "standard", "web", and "default" directories from the default installation in the server directory.

    Then, I ran both servers with the commands above, and out-of-the-box, they work.

    Also, here is a batch file to run clustered instead of separate instances:

    @echo off
    
    start .\bin\run.bat -c ports-01 -g MyLocal -u 239.255.100.100 -b 127.0.0.1 -Djboss.messaging.ServerPeerID=1 -Djboss.service.binding.set=ports-01
    
    @echo Wait until first server finishes starting and then hit 
    @echo any key to start the second server in the cluster...
    pause
    
    start .\bin\run.bat -c ports-02 -g MyLocal -u 239.255.100.100 -b 127.0.0.1 -Djboss.messaging.ServerPeerID=2 -Djboss.service.binding.set=ports-02
    
    : that was very useful info.
    djangofan : thanks! i finally got over 1000 points. ;-)
    From djangofan
  • Instead of using the BindingManager, you can assign more than one address to your network interface (at least if you're in control of your network) and let each JBoss instance run using its own address (using the -b option to run.(bat|sh)). This is also possible on the local loopback interface (127.0.0.1, 127.0.0.2, ...).

    From mafro
  • Quick follow up to Mafro's post about multiple IP addresses - depending on how MANY instances you want to run on a single host, the multiple IP addressing scheme may be the most stable. Once you start getting to 4-5 JBoss instances on a single host (and also depending on which J2EE services you've got enabled in your app, if any) you may soon run into port conflict issues when you use the "ports" implementation.

    Multiple IP addresses will allow you to run all of your J2EE and JBoss services on their default ports, and avoid many of the "hunting down a port conflict scenarios" you encounter when running more than one instance.

    A final note, according to the JBoss wiki, using multiple IP addresses is the "preferred solution" especially for Production environments - http://community.jboss.org/wiki/ConfiguringMultipleJBossInstancesOnOnemachine. See that link for additional resources on using the Binding Manager to configure JBoss services and applications to avoid conflicts.

    djangofan : nice link, thanks. aside from the document, how exactly do you establish a second IP address on the same network adapter? how would i assign a second static public IP?
    mafro : Here's a guide for windows: http://www.itsyourip.com/networking/how-to-add-multiple-ip-address-in-windows-2000xp2003/
    BJ Hoffpauir : It's not a definitive guide, but I found a pretty good explanation of not only how to do it, but why it works this way on the Linux Help Blog - http://linuxhelp.blogspot.com/2005/05/setting-up-multiple-ip-addresses-on.html
  • If you don't use RMI or remoting, you can actually tweak the configuration of jboss to only use one port. It is very annoying work to do (tons of configuration files), but possible if you really need to.

    To do so:

    • Remove all services that you don't use.
    • If you can't remove an invoker, you can probably set transport="local" so it will use in-memory transport.
    • Set the ports of the remaining services to -1.
    • Set the following system properties to disable Arjuna management ports: com.arjuna.ats.arjuna.coordinator.transactionStatusManagerEnable=NO and com.arjuna.ats.arjuna.recovery.recoveryListener=NO

    Configuration files you absolutely need to change:

    • jboss-service.xml - disable all services you don't need
    • legacy-invokers-service.xml - remove legacy services if possible
    • messaging/messaging-bisocket-service.xml - change transport to local instead of bisocket

    There will be a few more files.

    What is left is a JBoss which listens on the web port and one other randomly chosen port whose use I don't yet know. This will make it easy to run multiple instances on one host.

    djangofan : would love to have more info on this theory... running JBoss with only 1 port would be quite nice I think. my company has completely hand-rolled apps with our own separate service ports and so being able to turn off most of the JBoss ports would be awesome.
  • How do I compile an application for JBoss 5.1.GA? I can't find the Ant tool.

  • You can also configure multiple JBoss instances by renaming run.bat and creating a new run.bat that calls the original with -c instance-name. Then you'll be able to start each JBoss instance as a service by calling the appropriate run.bat.

    Igor Monteiro.

    djangofan : how does that handle the peerID for the clustering? it seems like you need to pass the peerId option for that to work.

Monitor turns off about 1 second after turning it on

Hi all!

I have a hardware issue with some of the LCD monitors we have in our office. My problem is not related to video cards or anything else with the computer itself.

I have 2 Dell 17" LCD screens that go off (blank) after 1 or 2 seconds. The light remains green, so it's not idling or sleeping. Just this morning I had to replace one; it went blank on 2 workstations and also on my laptop, so it's nothing to do with the computer. I tried both the VGA and DVI connections without luck.

I strongly suspect this is a capacitor or something else inside the screen, but I can't figure out what it is...

Has anybody heard of this kind of issue before?

Regards,

David.

  • It is most likely the inverter that provides power to the backlight failing. This is the first relevant link that turned up in a quick search.

    If, when the screen dims, you can see a very faint image of what should be clearly visible, then the backlight failing is the most likely cause, and the behaviour you describe (being on for a short time) points at the inverter being to blame. It is a relatively common way for an LCD monitor to malfunction.

    The inverter is usually replaceable though depending on the monitor in question and your level of (or access to) expertise it may well be easier and/or cheaper to replace the monitor. Obviously if the monitors are under warranty you need to contact the manufacturer or your supplier for replacements.

    My five year old 19" LCD went this way and I used it as an excuse to upgrade. Given how many people queued up to take the old one off me via FreeCycle it was certainly worth, to some people, the time and materials cost of trying to repair - your mileage may vary.

  • Well as he stated, the light didn't go into standby, and there was no picture displayed. That means there was not a faint image.

    It's 2 capacitors that are the problem. It's a known issue that isn't very widespread. If you take the backing off of the LCD you will notice two capacitors on the board popping out. Those are the ones that need to be removed and replaced.

    From Krazie

How to check my linux server isn't spamming

I'm worried about dodgy PHP scripts or other malicious software on my Linux server sending out spam. Or maybe I've left an open relay.

What are the ways to check that I'm not sending any spam out?

  • One of the easiest ways is to check your /var/log/maillog (default location) to see if it's sending out mail that you're not expecting.

    aidan : That's exactly what I'm looking for, Thanks! (strangely, that file doesn't exist on my server though)
    AliGibbs : Well- depending on your setup, it might be elsewhere (try a search)- else, could it be that you haven't sent any mail yet?
    aidan : might be mail.log in ubuntu. looks clear.
    AliGibbs : Looking at https://help.ubuntu.com/8.04/serverguide/C/postfix.html it does seem that the default mail location is /var/log/mail.log Might be worth writing a test php mail script (or I have one if you want) to check its logging to this location
    Jacek Konieczny : The MTA (mail transfer agent) installed on the server may not be used at all, and the spam may still be sent from the machine by other means: e.g. misconfigured proxy server or malicious software running on the machine and sending the emails directly (not using local MTA).
    From AliGibbs
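    As a rough illustration of what to look for in that log, here is a minimal Python sketch that tallies successfully delivered recipients by domain. The log format assumed below is Postfix-style and the sample lines are invented; adjust the regex to whatever your MTA actually writes:

    ```python
    import re
    from collections import Counter

    # Matches Postfix-style delivery lines; adjust for your MTA's log format.
    TO_RE = re.compile(r"to=<([^>]+)>.*status=sent")

    def count_sent(lines):
        """Return a count of recipient domains for successfully sent mail."""
        domains = Counter()
        for line in lines:
            m = TO_RE.search(line)
            if m:
                domains[m.group(1).split("@")[-1]] += 1
        return domains

    sample = [
        "Mar 1 10:00:01 host postfix/smtp[123]: ABC: to=<user@example.com>, status=sent (250 OK)",
        "Mar 1 10:00:02 host postfix/smtp[124]: DEF: to=<other@example.org>, status=sent (250 OK)",
        "Mar 1 10:00:03 host postfix/smtp[125]: GHI: to=<bad@example.com>, status=bounced",
    ]
    print(count_sent(sample))
    ```

    A sudden spike of deliveries to domains you never legitimately mail is the red flag to dig into.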
  • I've used abuse.net in the past to check that my server is not an open relay.

    Not used it for a while though, but gotta be worth a quick test if you're worried/unsure.

    aidan : That's a useful link to have. Using it now. thanks.
    From Grhm
  • Do you have PHP scripts on your server that make use of email? Make damn sure that those don't allow visitors to specify the address that mail is sent to. That means not having To fields in forms that create email.

    That alone is not enough, as spammers can inject mail headers into poorly written mailer scripts. Check out http://www.alt-php-faq.org/local/115/ for a discussion on this.

    You may not have control of all the scripts on your server, so you may want to read http://ilia.ws/archives/149-mail-logging-for-PHP.html which gives details of a PHP extension which logs all use of the mail function. That will give you a specific place to look for PHP related mail activity, which may be useful if you also send mail legitimately from this server.

    From dunxd
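    The injection the linked FAQ describes works by smuggling CR/LF characters into a form field that ends up in a mail header, so the attacker can append extra headers like Bcc. The core check is tiny; here it is sketched in Python rather than PHP, purely to show the idea:

    ```python
    def is_header_safe(value):
        """True if a form value can go into a mail header without injecting new headers."""
        return "\r" not in value and "\n" not in value

    # A plain name is fine; a value with an embedded newline is an injection attempt.
    print(is_header_safe("Alice Smith"))
    print(is_header_safe("alice@example.com\nBcc: victim@example.net"))
    ```

    Any mailer script should reject (not just strip) field values that fail this check, since stripping can be bypassed with creative encodings.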
  • The best way is to monitor the traffic generated by the machine. This may show if something suspicious is happening, whatever the source of the spam (a badly configured mail server, a badly configured proxy, or malicious software). Especially take a look at outgoing connections to port 25. If you can see many more such connections than the mails the machine is supposed to send, then the machine is probably being abused. On closer inspection you may also find many 'MX' queries (sent to find victims' name servers) or suspicious incoming connections (used to control 'trojan horse' software).

    Next step is to find the abused service and fix it.

    aidan : Sounds good. What's a good way of monitoring the traffic on port 25? Wireshark? (I've only got a CLI - no GUI)
    Jacek Konieczny : Anything will do. Wireshark (it has simple text interface too), tcpdump (you can write a dump file, and then open it somewhere else with Wireshark GUI), iptraf (will show what is going on 'on the wire' with quite visual form, sill text console).
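    The "many more connections than expected" heuristic above can be sketched as a small script. Assume you have already parsed tcpdump or `netstat` output into (process, destination IP, destination port) tuples; the parsing itself is omitted and the names below are illustrative:

    ```python
    from collections import Counter

    def smtp_talkers(connections, threshold=10):
        """Return processes opening more than `threshold` connections to port 25."""
        counts = Counter(proc for proc, ip, port in connections if port == 25)
        return {proc: n for proc, n in counts.items() if n > threshold}

    # Invented sample data: a web server suddenly opening 50 SMTP connections.
    conns = [("apache", "203.0.113.%d" % i, 25) for i in range(50)]
    conns += [("postfix", "198.51.100.1", 25)] * 3
    print(smtp_talkers(conns))
    ```

    The threshold should be tuned to how much mail the box legitimately sends; the point is the shape of the check, not the number.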

Microsoft Browser Choice in a corporate environment

It seems that Microsoft has enabled their Browser Choice system for EU customers today, several news outlets are reporting end users seeing the message and there are screenshots coming through on Twitter.

I can't seem to find any information about how this will affect corporate users; for some reason the blog post on Microsoft.com is blocked by our upstream provider. All of our users operate in a Least-Privileged environment so offering a choice of browsers is just going to cause pointless support calls.

Questions

  • How will the Browser Choice affect corporate users?
  • Is it possible to disable this on a corporate network, through Group Policy?

Comment

I have summarized some of the points brought up below:

  • Transparent Windows authentication is not supported by third-party browsers, meaning SharePoint will require a login. In our case just using the internet would require a login, as we use transparent Windows authentication to authenticate users against the proxy.

  • ActiveX and VBScript - many legacy pieces of software were written for a world where IE was the only choice; this can be mitigated by a Supported/Unsupported model, giving end-users the choice but putting some restrictions on it.

  • Group Policy integration - there are ways of getting Proxy Settings and security certificates into Firefox through group policy and start up scripts, even if we were to say that practically only three browsers would be used (Firefox, Chrome and IE8) that is still a huge swathe of extra testing and configuration.

  • Integration with WSUS - Firefox is updated fairly regularly with security updates. At home this isn't a problem as I can elevate to an admin user to install the update, but does Firefox give corporate systems administrators any mechanism for disabling Firefox update notifications?

  • If it makes corporate IT policies shift to installing more than one browser, so that corporate development doesn't lock into one browser as happened with IE6, and corporate users are not treated like kindergarteners, then it's a good thing.

    If all it does is generate "pointless support calls", then either a) the users aren't that smart, or b) the corporate IT policies are not being updated to realize there are choices.

    Richard Slater : Unfortunately we don't have any choice but to use Internet Explorer; many of the software packages we are legally obliged to use are IE only (due to ActiveX or just poor code). We tried making another browser available (Firefox), however it was confusing for the end users having to select which browser to use based upon what they wanted to do.
    From warren
  • I don't necessarily think it's a good thing. There are perfectly valid reasons why corporations at least prefer to standardise on a single browser (whichever browser that be), including having a known-good baseline across all PCs. If you ever work anywhere where - for example - something like Oracle financials is used, you will understand at least part of what I'm saying.

    There are also other perfectly valid reasons why IE may well be the browser of choice for a corporate environment, including factors such as integrated Windows authentication, support for ActiveX and VBScript, Group Policy integration, integration with WSUS, and so forth. Unless and until alternative browsers start offering these facilities then I'm afraid to say that alternate browser choice (for corporate users) is little more than a pipe dream, and will generate havoc on the helpdesk side.

    Richard Slater : The havoc on the helpdesk is what I am most concerned about, equally if this kicks in during a online exam that is being assessed it could cause us to fail the assessment.
    From mh
  • I like the supported and unsupported approach. Since IT probably has limited resources, you can officially support IE, since that is what some of your apps require, and then unofficially allow users to use other browsers like Firefox and Chrome. So as a department you might be willing to install those browsers if they need Admin rights, but that is as far as IT will go.

    The result is that your IT requirements stay the same, but more advanced users don't get upset that they can't use what they want to use. For the people that want to be power users but then need help using their browser, tough luck :-)

    Richard Slater : I like the supported/unsupported model, we use that with some of the weirder software we are asked to install.
    Grhm : One major issue with this is security updates. In a corporate environment, someone is usually responsible for ensuring clients have the relevant security updates installed. Allowing multiple browsers, officially or not, means opening up multiple potential attack routes. For many companies that is too much risk, and generally the larger the company, the more risk averse they are.
    Richard Slater : The more I think about multiple browsers on the desktop the more I feel that it just needs to be blocked and stick with IE8+.

Best practice for assigning private IP ranges?

Is it common practice to use certain private IP address ranges for certain purposes?

I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access.

Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently?

I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.

  • RFC1918 details the 3 IP blocks that are reserved for private address space. The 2 common ones are:

    • 10.0.0.0 - 10.255.255.255 (10/8 prefix)
    • 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

    If you're setting up a separate network for storage, it would probably make sense to choose an IP range similar but slightly different to what you are using for regular networking. Consistency is good, but using different IP ranges allows you to be connected to both networks simultaneously, for example if you need to look something up while doing management with your laptop?

    Tauren : So my laptop gets an IP number in the 192.168.0.x range from DHCP. I'm thinking that my storage network should be in the 10.x.x.x range to keep them really separate. Is this common practice, or do many places use something like 192.168.1.x for their storage?
    Antoine Benkemoun : 172.16-31/16 also =) Not much used though.
    pulegium : @Tauren: 192.168.1.x/24 is as equally separate from 192.168.0.x/24 as 10.0.0.x/24 is. It can't be "more" or "less" separate. They are on different subnets, full stop... :)
    pboin : That's true for computers, but not for the people that work on them. Keeping staff members non-confused is a good thing, and naming standards go a long way towards that.
    Tauren : @pulegium: yes, I understand they are actually separate, but I meant in the "human sense", like @pboin mentions.
    From Nic
  • There is about as much consensus on IP addressing as there is on server names (see this site, ad nauseam); it just comes down to personal preference - typically that of the first guy to set it all up!

    No, there is no single proper way of doing it. Simply pick one of the three RFC1918 ranges (cheers @Nic Waller), split it into subnets (traditionally /24s, though /23s are becoming more popular), and assign one subnet for public access and one for private. Job done. Really, the hard part is setting up the VLANs and ACLs.

    Personally I prefer using the 10.x.x.x range as I can type it quicker than the other two, but really it makes no difference unless you need the larger size (192.168.x.x gives you 256 subnets of 254 IP addresses, whereas 10.x.x.x gives you 65,536).
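A quick way to sanity-check the subnet arithmetic above, using Python's standard ipaddress module (3.3+):

```python
# Counting how many /24 subnets each RFC1918 block yields, and how many
# usable host addresses a /24 provides (network + broadcast excluded).
import ipaddress

rfc1918_c = ipaddress.ip_network("192.168.0.0/16")
rfc1918_a = ipaddress.ip_network("10.0.0.0/8")

print(sum(1 for _ in rfc1918_c.subnets(new_prefix=24)))  # 256 subnets
print(sum(1 for _ in rfc1918_a.subnets(new_prefix=24)))  # 65536 subnets
print(ipaddress.ip_network("192.168.1.0/24").num_addresses - 2)  # 254 hosts
```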

    I would not suggest mixing the ranges, for instance having 192.168.x.x for private and 10.x.x.x for public; technically it shouldn't matter, but it would be very confusing.

    Tauren : @Jon, thanks for your suggestions. This helps confirm most of what I thought was the case.
  • Most systems I've seen attempt to map the IP ranges to a hierarchy of geography and/or system components.

    One employer tended to use:

    10.building.floor.device (with non-user resource VLANs using 10.x.100.x to 10.x.120.x)

    and

    10.major_system.tier_or_subsystem.component

    Tauren : @caelyx: this sounds like a good approach that I could make use of. thanks!
    caelyx : @Tauren - no worries; happy to help! Thanks for the upvote :)
    From caelyx
  • One thing I would suggest is to use randomly selected private ranges from the 10.0.0.0/8 block for all of your private addresses. This avoids a lot of problems, particularly when setting up VPNs between home/partner networks and your corporate network. Most home routers (and many corporate setups) use 192.168.0.0/24 or 10.0.0.0/24, so you'll spend hours sorting out various connectivity issues when you try to establish connectivity between two private networks.

    If, however, you choose a random range like 10.145.0.0/16, and then subnet from there, it is far less likely that you will "collide" with a business partner's or home network's private IP range.
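That collision check is easy to script; a sketch with Python's standard ipaddress module (the specific ranges are only examples):

```python
# Check a randomly chosen corporate range against ranges commonly used by
# home routers before building a VPN between the two networks.
import ipaddress

candidate = ipaddress.ip_network("10.145.0.0/16")  # randomly chosen, as suggested
common_defaults = [
    ipaddress.ip_network("192.168.0.0/24"),
    ipaddress.ip_network("192.168.1.0/24"),
    ipaddress.ip_network("10.0.0.0/24"),
]

collisions = [n for n in common_defaults if candidate.overlaps(n)]
print(collisions)  # an empty list means it is safe to route between them
```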

    : For site addressing you could subnet 10.0.0.0/8 into /24s and encode the longitude and latitude in the spare octets. ;-)
    rmalayter : Unless your sites are less than one degree apart. We had two offices a few city blocks apart at one point, which are less than 0.02 degrees apart in terms of lat/lon ;-)
    From rmalayter

[Unix - NetBSD] Troubles installing KDE with pkgsrc

I'm in school right now and I have taken two classes, Networking and Unix Development, that focus on C programming in Unix. Specifically, we have been using NetBSD for the machines we develop on (rather, our programs must work on NetBSD). Our school network has been really finicky of late and I haven't been able to SSH in. I thought this would be the perfect time to create a NetBSD box of my own, because 1) my programs must compile and run on NetBSD, and 2) I really don't know how to manipulate/operate a Unix environment (although I understand the internal workings).

With that being said, I set out to get NetBSD working today since it's my day off. I have learned a ton about operating NetBSD/Unix (I guess I never really knew much), but I am stuck trying to install KDE right now. I would like to say that my Google searches were successful/resourceful, but I'm afraid they weren't. I don't know if what I was searching for was too vague or just not the right thing, but here I am looking for help.

I am using pkgsrc to install the binary package of KDE 3.5.10. When I run pkg_add kde-3.5.10, it starts doing whatever it is supposed to do (I don't know the optional command args to make pkg_add report on what it's doing). It seems to work for ~5 minutes but then fails and gives the following errors:

  • pkg_add: Read error for lib/liblcms.so.1.0.18: Premature end of gzip compressed data: Input/output error
  • original MD5 checksum failed, not deleting: /usr/pkg/lib/liblcms.so.1.0.18
  • pkg_add: Couldn't remove /usr/pkg/lib/pkgconfig/lcms.pc
  • ...
  • pkg_add: Can't install dependency lcms>=1.12nb2
  • ...
  • pkg_add: 1 package addition failed

I really have no idea what those errors mean. Each error shown as ... is the same as the one above, but with a different path/dependency (let me know if you want to see them all).

The steps I took to the point to where I could actually try and install KDE were:

  • Install NetBSD 5.0.1
  • Configure dhcpcd with one of my network cards
  • Set the appropriate environment variables and get pkgsrc via CVS
  • Set the appropriate environment variable for the location of binary packages
  • Execute pkg_add

I'm sorry if this is a trivial error and something that I should be able to figure out on my own, but today was the first time I ever attempted to install Unix/Linux. All the programming assignments I had done up to this point just required me to SSH into a server, use an editor (Emacs) to write my code, and compile it with a Makefile. Any help, tips, or pointers would be GREATLY appreciated. :D

Thanks again for your help.

On a side note I didn't know if I ought to post this on ServerFault or SuperUser. If these kinds of questions are more geared towards SuperUser, please let me know and I will post future questions there.

  • Do you have all of the binary packages that kde depends on available (like lcms)? It's not enough just to have the kde package. You can set up your machine to use a remote repository, see:

    http://www.netbsd.org/docs/pkgsrc/using.html#using-pkg Section 4.1.2 in particular.

    Chris : I do. Before I use pkg_add I set an environment variable PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/OPSYS/ARCH/VERSIONS/All". I am presuming that when I use pkg_add it goes to this remote directory and gets the binaries it needs there. Or am I wrong about assuming this stuff. I did in fact follow those instructions in the link you provided.
    quadruplebucky : You're replacing "OPSYS/ARCH" with the appropriate stuff? (the NetBSD docs appear to need updating here....) try PKG_PATH="ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/i386/5.0.1_2009Q4/All" export PKG_PATH change i386 if you're on a different architecture

Apache not handling python scripts (*.py) via browser

Edit: OS is CentOS 5

I installed Python 2.5.5 and am trying to run some Python scripts via the browser.

Honestly, I have not worked with Python before. I attempted to load the python module into Apache, but it is already loaded and was skipped. I also confirmed that I can run python scripts from my command line if I make them executable.

However when I put "http://www.example.com/test.py" into my browser, it returns unparsed HTML as follows:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or
misconfiguration and was unable to complete
your request.</p>
<p>Please contact the server administrator,
 root@localhost and inform them of the time the error occurred,
and anything you might have done that may have
caused the error.</p>
<p>More information about this error may be available
in the server error log.</p>
<hr>
<address>Apache/2.2.3 (CentOS) Server at www.example.com Port 80</address>
</body></html>

I also have the following in my httpd.conf file.

AddHandler cgi-script .py

I am stumped as I do not know where to look from here. Does this ring a bell for anyone? Hopefully nothing too obvious that I am overlooking here...

Thank you in advance.

Edit: Found the following in the Apache error_log.

[Fri Feb 26 19:58:38 2010] [error] [client xxx.xxx.xxx.xxx] (13)Permission denied: exec of 'test.py' failed
[Fri Feb 26 19:58:38 2010] [error] [client xxx.xxx.xxx.xxx] Premature end of script headers: test.py
[Fri Feb 26 20:04:56 2010] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads.
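For what it's worth, "Premature end of script headers" generally means the script did not emit a valid CGI header block (or could not be executed at all). Assuming the plain-CGI route from the question (AddHandler cgi-script .py with an executable script), a minimal test.py would look something like:

```python
#!/usr/bin/env python
# Minimal CGI sketch: a CGI response must begin with at least a
# Content-Type header followed by a blank line; Apache reports
# "Premature end of script headers" when that block is missing.
import sys

sys.stdout.write("Content-Type: text/html\r\n")
sys.stdout.write("\r\n")  # the blank line terminates the header block
sys.stdout.write("<html><body>Hello from Python CGI</body></html>\n")
```

The script also needs the execute bit (chmod a+x test.py) and must live somewhere Apache is allowed to run CGI, e.g. a ScriptAlias directory or one with Options +ExecCGI.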
  • I've only ever used mod_python for a Trac install and they provide fairly explicit instructions for their application.

    However, while we were testing mod_python, I found this article helpful - you may too.

    Structure : Thanks - reading up on this.
    From Grhm
  • Apache will only execute files that are located in designated cgi-bin directories. Everything else is considered content that is passed to the viewer. Your root directory isn't, and shouldn't be, marked as such.

    Use the ScriptAlias <url-path> <directory> directive to set your cgi-bin directories. eg: ScriptAlias /cgi-bin/ /webroot/cgi-bin/. Copy your scripts there, then call http://www.example.com/cgi-bin/test.py. That should work for you.

    Structure : I configured the ScriptAlias as follows: ScriptAlias /cgi-bin/ /webroot/test/ Then I copied my script to that directory and called it from the browser, but the same error occurred.
  • Error 13 from apache indicates a filesystem permissions problem.
    Is SElinux enabled? (what's the output of "ls -laZ test.py")

    I doubt it's a problem with ScriptAlias or AddHandler/ExecCGI (either of which will get apache to execute scripts) - since you're getting a 500 error and not the python source apache is clearly trying to execute the file.

    Structure : -rw-r--r-- user user test.py
    Structure : Above is the output. So far as I can tell, SElinux is installed but not enabled on this machine. (rental server, not exactly sure)
    quadruplebucky : Ah, chmod a+x that script. As an aside, cat /etc/sysconfig/selinux or sestatus will tell you SElinux status. If SElinux is not enabled then...
    Structure : SELINUX=disabled, also, I ran chmod a+x but to no avail. I am going to take a look Apache permissions, etc.
    quadruplebucky : Wait, I didn't notice this before... AddHandler python-program .py (not cgi-script) Also, try renaming your script - python has a builtin module "test".
    Structure : Thanks, I added the following three lines within a directive. AddHandler python-program .py, PythonHandler mytest, PythonDebug On... With these directives in my httpd.conf file I am able to run the test script. I am going to mark this as the answer given that it got me closest to the solution. Thank you for the assistance!

Soft/fake RAID in Linux

original
I don't have much experience with Linux, but this is a good opportunity to learn.

I'm setting up a simple database server and I would like to know if Ubuntu Server 9.10 (what do you guys recommend as a beginner's server distribution?) would work with hardware RAID-1 on this motherboard (there is no Linux RAID driver listed on the vendor's download page):

http://www.foxconnchannel.com/product/Motherboards/detail_spec.aspx?ID=en-us0000346

edit

After some tips, I discovered that what I call RAID is actually fakeraid. I also found some articles about running Linux on fakeraid using dmraid, and software RAID was suggested; since the performance/capabilities are almost the same, I need help with another question:

Which one is easier to set up, and which will automatically recover and/or boot from one disk after a failure?

Keep in mind that I'm no expert, so if something is very hard to configure I prefer to stay away, at least for now.

Thanks in advance

Arthur

  • You may want to familiarize yourself with the UbuntuHCL (Hardware Compatibility List). Specifically, the motherboard list and the storage controller list.

    arthurprs : Thanks for the quick reply. I found the board listed at http://www.ubuntuhcl.org/browse/product+foxconn-a6vmx?id=1674 but no info :/
  • Normally you wouldn't need any drivers, as a hardware RAID controller presents the RAID device to your operating system as a single physical device. So you'd see /dev/sda, but in fact it is made up of two or more disks.

    Mirroring parameters etc. are all controlled from the RAID controller firmware, which you can access during the server's POST (this is when you hit keys to get into the BIOS, etc.). Check the motherboard manual for how to configure the RAID device, or just pay attention to the boot messages printed on the screen.

    With regard to your question about a server OS, I'd recommend looking at CentOS, which is basically a recompiled Red Hat Enterprise Linux. This is what the "big guys" are using... :)

    arthurprs : So i shouldn't have any problems? This is great news :D!
    pulegium : Yeah, you should be OK, as the majority of hardware RAID controllers operate that way. Having said that, I've downloaded the MB manual and haven't seen a RAID configuration section. It does talk about standard MB BIOS stuff but doesn't mention RAID at all. Normally that's OK, as the RAID controller utility is something different from the MB BIOS tool. Found this http://forum.egypt.com/enforum/hardware-networking-f198/problem-foxconn-raid-setup-19303.html which indicates that the hw RAID configuration does indeed exist (and you can get there with Ctrl-F), so yeah, all should be fine :)
    Wesley 'Nonapeptide' : @pulegium concerning your statement: "Normally you wouldn't need any drivers" Unless I'm misunderstanding something, you need drivers to access the RAID controller itself to be able to see the volume that it creates from multiple drives. Thus when installing a new server OS you usually need some kind of disk with the RAID controller's drivers on it for the installation to proceed because in many cases it won't even see a hard drive unless it has some compatible RAID controller drivers already in the installation files.
    pulegium : @Wesley, what I meant here is that you normally don't need any additional specific drivers to enable the h/w controller. I wanted to say that the SAS drivers supplied by majority of Linux distributions would do the trick.
    From pulegium
  • If you mean the RAID controller built into the motherboard, I'd AVOID IT. It's not true hardware RAID.

    Motherboard RAID is regarded as the worst of RAIDs, as it is motherboard specific, there are several online instances of the motherboard just losing the RAID configuration and hosing volumes, and in the end, if you're trying to get RAID on the less expensive but capable side, use software RAID built into Linux.

    True hardware RAID is cached and will cost you in the wallet, but it costs more for a reason. Motherboard RAID often is just software RAID in firmware, only it can make the volume specific to that machine. Drive die or hardware issue? You can't necessarily recover the data by moving it to another system, since the motherboard may have done something odd to the formatting of the disk volume.

    If you're looking for hardware RAID with Linux, I've had good luck with 3Ware controllers, and if you don't want to spend the cash, use software RAID. Comes free with Linux.

    Bart Silverstrim : For more info, google "Fake RAID" to see what pops up.
    arthurprs : Thanks for the clarification, please see the edited question.
    Bart Silverstrim : If you run software RAID, I'll mention that (through this site) there have been questions about locating failed drives, since there can be many headaches when you get a logged failure of a drive but don't know which is which. You'll want to google a bit or search this site for questions about failed software raid drives, and see some tips about labeling your drives with serial numbers and software identification so you can tell which is which when you have a failure! :-) Hardware RAID will often have LED's and labeled cables that will tell you which is which without guessing.
  • I have always stayed away from onboard desktop board controllers (integrated server ones are a different ballpark), horror stories of incremental data corruption, shoddy drivers etc have had an effect. I would go for either an Adaptec (or similar) card that starts at around £100 or go for software RAID.

    If this is a small deployment, I would choose software RAID, it is pretty easy to manage and you have the flexibility of being able to mount half of a RAID mirror on virtually any Linux machine. Plus it is free, out of the box and relatively well battle tested. The main selling point for me is being able to manage it completely from inside the OS, no reboot required.

    In terms of OS, Ubuntu Server is pretty good and lightweight, however, I would recommend perhaps going for an LTS version. Alternatively, as suggested CentOS is a great server OS, it will have slightly older package sets but you do get a thoroughly tested product as a result.

    Wesley 'Nonapeptide' : In my opinion, integrated server controllers are just as poor and should be avoided when possible. ::casts contemptuous glance at an HP ML115::
    Frenchie : If you're giving consideration to Ubuntu LTS, also give consideration to Debian which has naturally longer release cycles and you can roll from one release to the next (which is something Ubuntu LTS releases have lacked but is promised for future releases)
    From Tom Werner
  • Don't use FOXCONN RAID with Linux!

    They are linux hostile. You should get rid of that motherboard and buy something better.

    Don't use Motherboard/Software RAID!

    Motherboard/software raid isn't very reliable, and you can easily end up with two bad copies instead of one good copy. It's very hard to recover from motherboard failures (unless you have more of the same motherboard), and it can be very hard to recover from disk failures (since disks tend not to be labelled well).

    Don't even use RAID!

    RAID is very slow, and doesn't protect against the problems that you think it does. It's not a replacement for a backup system, and it makes testing backups very hard, which means that in the wrong hands, your data is less safe in a RAID setup, than on a single disk.

    RAID should add a few hundred dollars to the cost of your server, and it can protect you from certain kinds of physical disk defects, and a (small) number of data-corruption problems. It won't protect you from:

    • Fire, Flood, or Lightning
    • Operating system errors
    • Sudden power loss
    • Stupidity

    A continuous backup system or a replicated+distributed storage system is always cheaper, and much more reliable. Depending on what you're doing, it may be harder to set up than a RAID system, but it is more obvious what you're protected against. That said, a proper RAID setup will include:

    • A standard disk layout
    • A battery-backup unit
    • Lots of on-board memory
    • Lots of cooling
    • Regular testing

    RAID systems lacking these things will silently corrupt your data and smash your hopes when you need them most: after a catastrophic failure. Even the best RAID systems won't protect against the thing that actually happens.

    Wesley 'Nonapeptide' : I was with you on points one and two, but you lost me on point three. =)
    arthurprs : I'm confused. I need a database server that is as reliable as possible and cheap, mostly because it will be running in another town, and I thought RAID would keep the software running until I contact the owner to get a replacement.
    pulegium : yeah, #3 is not entirely true. RAID is not a backup, but still a very important protection mechanism. We've got some 2k servers running and disk failure is not an unusual thing. All servers have h/w RAID1, and although we need to shut them down to replace the disk, never ever have we had to reinstall the OS/app. Saves some time.
    arthurprs : @pulegium - Do you use soft raid on those?
    Zypher : You had a good post going ... until you rant against raid ... there is a reason that is stands for _REDUNDENT_ array of inexpensive|independant disks. It's for Redundency and HA not backups.
    geocar : @Zypher: Redundant, not redundent.
    geocar : @arthurprs - Raid isn't cheap. As I said, it adds a few hundred dollars to the cost of the server. If you don't have a bbu (battery backup), you can't safely enable disk cache, which makes it really slow.
    geocar : @pulegium - disk failures are rare in my network center. I burn in all disks for three days, then off three days, then on again before using and use continuous backups so even when I lose a disk, I don't lose any data. Switchover is done with heartbeat to the last clone, and machines get recycled after 5 years.
    arthurprs : @geocar - Thanks for the tip on the battery backup
    pulegium : @arthurprs nope, all h/w RAIDs there.
    pulegium : @geocar so when the disk goes you lose your server? until it's restored from backup? and that can happen anytime, like during the peak? we normally schedule server downtime for out-of-peak hrs... makes our customers happy :)
    geocar : @pulegium: nope. there's no "off-peak" for web hosting. heartbeat brings the last clone up automatically (well, within 30 seconds). It's also easy to check because: I can simply test the clones periodically.
    Evan Anderson : I agree that motherboard RAID is junk. Software RAID isn't motherboard RAID, though. I just can't get behind the "don't use RAID" argument. Simple software RAID-1 for small installations protects against common hard disk failures. It's nice that disk failure for you are rare-- they've been common enough in my life to make RAID-1 worth it on many occasions. Implementing "continuous backup system or a replicated+distributed storage" is much more costly than adding a 2nd hard disk drive to a server computer in every case I can think of. I think you're living in a dream world.
    geocar : @Evan Anderson - You do not know what you're talking about. To get reliability with software RAID (or otherwise without battery-backup) means disabling the disk cache, and it needs your operating system to *carefully* order writes, or you can end up losing both copies. Buying two 1000$ servers is cheaper than one 3000$ server with the much faster disks (to overcome RAID's slowness).
    Evan Anderson : @geocar: I'll take software RAID-1 with a journaled filesystem on a server w/ a UPS over JBOD and this mystical "free" clustering / failover you speak of any day. Access time on RAID-1 is just as fast as JBOD, and RAID-10 blows JBOD out of the water w/ any number of spindles, so I don't buy your "RAID's slowness" argument. You're just making up numbers (re: "1000$" and "3000$") and living in a fantasy world where clustering and failover capability doesn't cost money in licensing and TCO (not to mention complicating systems and creating additional points of failure).
    From geocar
  • To just start screwing around with RAID-1, I used Ubuntu 9.10; it works wonderfully. I had one big problem that I'll repost here just in case you run into it; it was really killing me.

    Installing a new ubuntu setup with raid1 as part of the install is the easiest way to go. If you're trying to turn an existing drive into a raid array, it's a little harder.

    Basically you have to make the new drive a RAID drive, copy all your old drive's contents to it, then reformat the old drive to be part of the array and tell the RAID to update it, and it will mirror over the data from the good RAID drive.

    And here was my big problem: you have to add the grub config by hand (to both drives), and what grub tells you is hd0 and hd1 can differ between running grub from the command line while the machine is up versus dropping to the grub command prompt at bootup.

    And it's the values it perceives at bootup that need to go into the grub config, not the ones you get from grub after the machine has booted.

    From Stu
  • I've had good luck with ZFS under Solaris. It's not Linux, but it's just as easy to install (and just as much hassle if your hardware isn't supported...) and tends to have fewer exploits, if you worry about such things. ZFS offers excellent performance and allows you to create an array using whatever disks are handy (all your disks don't have to be the same size or speed). All your standard OSS software is available (Apache, PostgreSQL, MySQL, PHP, Perl, Python, etc), and the standard desktop is Gnome, so there isn't a long learning curve.

    From TMN