Zero to Mux (with wiki)
-
So I'm going through the set up and I'm a little bit in over my head.
I had to reset the root password, which it made me do and I did, but now I get something that says:
root@<dropletname>:~#
Not really sure what I put there and I can't find anywhere that explains it.
-
That should just be your command-line prompt. It's where you'll start following instructions in the codeblocks of the little tutorial.
-
So, I think this how-to is hitting the obsolete end of things. Digital Ocean has completely changed their layout in the last year, and the thread makes it messy and hard to follow.
Would it be possible to update this to fix the glitchy bits and/or streamline so it's easier for a nublet to follow along? I'm really struggling this time, and even the first time I had set up Descent it was a kludgy mess that took quite some time to get fixed by a knowledgeable codey person.
-
I am following the instructions to install PennMUSH, but it is not understanding 'make update' or 'make install'. It is telling me the command is not found.
-
@toreadorfool When you type the command ls in the DO shell, do you see a file listed called "Makefile" ?
-
So DigitalOcean removed the MediaWiki one-click droplet.
(Great, now what?)
• Create the Server
- Use the LAMP droplet.
- Log in as root. Change the password. Run:
mysql_secure_installation
apt-get update
- From this page, the following seems to install all the gcc/c++/etc. tools we'll need:
apt-get install build-essential libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
- Create a new user for the game's account and escalate their privs.
Use these directions from DigitalOcean.
(Skip the SSH key stuff for it. See below.)
• Log Into the Game Account
- Create a public SSH key for it (mostly optional, do it anyway):
ssh-keygen -t rsa -C "user@email.com"
- Get you some PCRE (critical for Chime's install at least):
sudo apt-get install libpcre3 libpcre3-dev
• Get Us Some MediaWiki
(Follow along with the instructions on DigitalOcean's site.)
- [Step 1] The apt-get instructions are out of date. Run the following:
sudo apt-get install php-intl
sudo apt-get install php-gd
sudo apt-get install php-mbstring
(If you want to install texlive later, remember to run the apache2 restart line.)
- [Step 2] Do not use the curl command in their example, as it will not work. Instead:
a. Go here: https://www.mediawiki.org/wiki/Download
b. Right-click on the download to the latest release and select 'copy link'.
c. In the terminal window: curl -O <paste the link>
d. Continue with the 'tar' and 'mv' instructions.
e. You might want to 'rm' the installation leftovers from your game account's directory.
- [Step 3] I'm using the game's account_name for both the database and user.
- [Step 4] You will absolutely want a wiki prefix. 'wiki_', for example. If you do anything else with your database, such as news/help integration or xp or a sensible stat system, you will want their table to have their own prefixes. (This is against good database design, but with the Mushlikes it's non-trivial to get around that.)
You're now ready to go back to the start of this thread and continue after making a new user.
I'm sorry that DigitalOcean took out a lot of the brain-dead-easy parts of this setup.
I hope that this helps.
Notifications:
- 09/12/17: The base TinyMUX install is not finding mysql.h. The Chime fork compiles fine. Looking for solution.
- 09/18/17: My bad. See two posts down.
-
@Thenomain said in Zero to Mux (with wiki):
Notifications:
09/12/17: The base TinyMUX install is not finding mysql.h. The Chime fork compiles fine. Looking for solution.
I'd have to see the exact error message this is returning. Not compiled mux in a while, but if it's the .depend that's broke it could be a bad entry for it.
You could try 'make depend' to enforce the dependency make and see if that fixes it.
Also is it the local MUX mysql.h or is it the system mysql.h?
If it's the system one (~/include/*/mysql.h) then you may need to specify MUX with a ./configure to specify the exact location of the header files. Again, been a while since I played with MUX.
-
The problem is with... well, my brain. A lot of my code issues are my not being detailed enough. I was missing '--with-mysql-include' in the original directions.
I seem to have it working with:
./configure --enable-stubslave --with-mysql-include=/usr/bin/mysql --with-mysql-libs=/usr/include/mysql --with-mysql-include=/usr/include/mysql --enable-inlinesql --enable-realitylvls
-
NECRO TIME
I installed Penn fresh on digitalocean today. Other than a few hiccups on what to install as far as missing packages, it ran pretty smoothly using CentOS.
==Installing Penn==
Here's what I did, and had zero issues.
- Spun up the CentOS 7 droplet.
- Logged in and changed root password and set up a sudo user.
- Logged into the new non-root user.
- Installed: nano (sudo yum install nano), wget (sudo yum install wget), dev tools (sudo yum groupinstall "Development Tools"), OpenSSL (sudo yum install openssl-devel).
- FTPed in my version of PennMUSH (I didn't use wget but could have)
- gunzipped and untarred it (gunzip file, tar -xvf file)
- cd pennmush
- ./configure to install
- cd game
- make update per normal
- make install
- edit mush.cnf
- ./restart
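For anyone new to the gunzip/tar step in that list, it works the same way on any tarball. Here's a toy run-through using a synthetic archive built in a temp directory, so the file names are placeholders rather than the real Penn release:

```shell
# Make a tiny throwaway .tar.gz, then unpack it the same way as the Penn tarball.
work=$(mktemp -d)
cd "$work"
mkdir pennmush && echo hello > pennmush/README
tar -cf pennmush.tar pennmush && gzip pennmush.tar   # produces pennmush.tar.gz
rm -rf pennmush                                      # pretend we just FTPed the archive in
gunzip pennmush.tar.gz                               # decompress: back to pennmush.tar
tar -xvf pennmush.tar                                # extract: recreates the pennmush/ directory
cat pennmush/README
```

With a real release you could also do both steps at once with `tar -xzvf pennmush.tar.gz`.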
Everything was working in less than a half hour.
Thanks to people for the initial tutorial!
-
https://github.com/tekmunkey/ubuntuMUSH
I recently added a MediaWiki installation routine to the ubuConfig.sh
The whole thing installs your LAMP server with both PHP and Python plus MySQL, handles the baseline config for both a MUSH user and a wiki user, and spits out what the new passwords are. The passwords are randomly generated at runtime, so you don't have to worry about an insecure/default-password installation.
There's also an installPennMUSH.sh and installTinyMUX.sh that compile in MySQL Support so all that's left is to plug your MySQL config info into your .conf file. The TMUX installer automatically activates REALMS + Reality Levels and the async MySQL module.
There's also an automated backup script that you just need to make some line-item mods to set up to your taste. The scripts are commented in a tutorial fashion.
-
And now to figure out how to make mediawiki on it. And then how to gulp import a mediawiki backup....
ETA: So, me knowing enough stuff to get installs going as a baseline, is it possible to drop a couple of different mediawiki instances in there on different directories or things that point? IE: I have hotbmush.com which I want to be a core wiki. But I may have some other site, othersite.com, as a wiki too. Is this feasible?
-
@bobotron You should, ideally, be able to set that sort of thing up with interwiki if you're using two different wikis on different hosts. It can be a pain in the ass in some respects, and I haven't hashed out all the fine points on it, but I was working on something like that before I pulled the 'fuck it' lever on trying to do something any time soon (since the asshole that more or less destroyed my desire to do anything in this hobby for the past year or so really hit critical asshole status a little while back and doesn't seem to give crap #1 about dragging that needle back from so far into the red where the evacuation sirens are going off and steam is shooting out of things that steam should not be shooting out of unless something's going to blow and take out the entire countryside).
The interwiki documentation is a little hazy, but I may be able to pull up some of the backups I have later in the week to talk you through how it generally works and some of the things you'd want to consider when doing it.
Keeping the same css and layout/design for both wikis is not necessary, but it's a good idea unless you're as anal-retentive as I am about how things flow and want to spend a lot of time making templates.
I was ideally setting something up that someone could later just yank the core materials over from the core wiki to theirs if they wanted to use the system, but it wasn't 'there yet' from my perspective on functionality, be warned. It sounds like you're trying to do something similar here, so feel free to nudge later in the week if you want, and I'll see what I can do.
-
@bobotron said in Zero to Mux (with wiki):
And now to figure out how to make mediawiki on it. And then how to gulp import a mediawiki backup....
ETA: So, me knowing enough stuff to get installs going as a baseline, is it possible to drop a couple of different mediawiki instances in there on different directories or things that point? IE: I have hotbmush.com which I want to be a core wiki. But I may have some other site, othersite.com, as a wiki too. Is this feasible?
Yeah, it's simple. My directory examples may not be 100% accurate unless you're using debian/ubuntu.
/etc/apache2/sites-available contains definitions for sites. Just look inside 000-default.conf there with a text editor to see how it should look. The important bit here is the directory specifier. For example, NEVER RUN YOUR WEBSITE OUT OF /var/www/html - ALWAYS SET UP A NON-ROOT AND NON-SUDO USER FOR SITES AND ESPECIALLY WIKIS. Those bits should be bold, italic, neon green on a black background.
The way it works is that the Apache service runs under the passwordless/loginless user named www-data with a group by the same name. So you create a new basic user with a home directory, typically named the same as the domain you're hosting there, then create a www directory inside that. Don't screw around with enabling public_html directories. That's chickenshit designed for multi-user/multi-tenant hosting scenarios.
So say your site is mydomain.com - log into the server as root or as a sudo-enabled user (preferably).
sudo adduser mydomain
This generates a user and a group at the same time, both named mydomain, and it generates a new /home/mydomain directory after you fill out the user's password and general info. You can just leave the general info blank, or put in other details if you like. You should really be using RSA keypairs instead of passwords to log in with any user at all, but you can set that up in /home/mydomain/.ssh/authorized_keys after your first login as that user.
sudo mkdir /home/mydomain/www
sudo chown www-data:www-data /home/mydomain/www
sudo chmod 775 /home/mydomain/www
What the last line does is set Read/Write/eXecute for Owner, RWX for Group, and RX only for everyone else (this is imperative if you want the webpages to be publicly available).
sudo mkdir /home/mydomain/www-logs
sudo chown www-data:www-data /home/mydomain/www-logs
sudo chmod 770 /home/mydomain/www-logs
This time the last line sets RWX for owner, RWX for group, and no read/no write for anyone else. This allows Apache to write into there, and it allows mydomain to read/clear logs.
sudo usermod -a -G www-data mydomain
This adds the user 'mydomain' to the 'www-data' group, giving them RWX access to the new /home/mydomain/www directory. If you're logged in as mydomain at the time then you have to log out and back in for the change to take effect.
You must not forget the -a, or usermod -G will replace the user's existing supplementary groups with just the ones you listed instead of adding one.
sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/mydomain.conf
Now modify /etc/apache2/sites-available/mydomain.conf - set the data directory to /home/mydomain/www and set the logs directory (or the individual log files to point into) /home/mydomain/www-logs - assign whatever domain name you want to use in the host definition line.
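As a concrete sketch of what that edited file might end up containing — the domain and paths here are the same placeholders from the example above — you could draft the definition from the shell and then copy it into /etc/apache2/sites-available/mydomain.conf:

```shell
# Sketch only: writes a minimal site definition to a temp file so you can
# inspect it before copying into /etc/apache2/sites-available/mydomain.conf.
# "mydomain.com" and the /home/mydomain paths are example placeholders.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<VirtualHost *:80>
    ServerName mydomain.com
    DocumentRoot /home/mydomain/www
    ErrorLog /home/mydomain/www-logs/error.log
    CustomLog /home/mydomain/www-logs/access.log combined
</VirtualHost>
EOF
grep -n 'DocumentRoot' "$conf"
```

Apache accepts plenty of other directives inside the VirtualHost block; this is just the minimum to retarget the data and log locations.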
sudo a2ensite mydomain
sudo /etc/init.d/apache2 restart
Or use systemctl if you're more comfortable with that.
At this point every distro has its little quirks, so you may have to tweak stuff to get the filesystem permissions just right. /var/log/apache2/error.log will tell you if this is the case. You can google error codes if any pop up. Apache is insanely well documented, both in the online manual and on literally every forum. Even StackOverflow.com manages not to fuck up advice on Apache configs and troubleshooting, which is a bit of a miracle if you've ever used StackOverflow for anything more complex than making toast.
Now all you have to do is download your wiki and unzip into /home/mydomain/www or else /home/mydomain/www/wiki (if you want to have a regular website at mydomain.com and the wiki at mydomain.com/wiki)
You'll want/need to create a separate MySQL schema for every wiki, but you can pretty safely use the same username/password as long as it's only accessible from localhost or 127.0.0.1 and only has privileges to those wiki schemas, and as long as nobody else ever logs into the linux shell. You should be able to tell each individual MediaWiki what schema and user credentials to use when you run the initial setup through a webbrowser.
Be advised that literally every single time you upload new files or create new files in /home/mydomain/www you'll need to be root or a sudo user and run:
sudo chown -R www-data:www-data /home/mydomain/www
AND/OR (either works)
sudo chmod -R 775 /home/mydomain/www
Otherwise the newly created files will be owned by the user mydomain and the group www-data, so Apache may have issues serving the pages without the chmod, or mydomain may have issues editing/overwriting data. I like for everything to be wholly owned by www-data:www-data (user and group) for uniformity. When you do the chown, you're doing the chmod for the user mydomain's benefit (the first 7). When you don't do the chown, you're doing the chmod for the group www-data's benefit (the second 7). In both cases you're doing the chmod for everyone else (the 5 at the end).
Short of installing SAMBA and running all your uploads through that for new files (which I don't recommend for a public server - SSH is vastly preferable to SMB) I don't know of a good way to make the chown/chmod stuff happen automatically. Setting every single file in there to READ and EXECUTE may not be the best idea ever either, but is a quick and dirty way of doing it. If you value security over laziness, then you would run the recursive chown but then individually chmod every single new file by itself appropriately.
ie: HTML should never be set eXecute but PHP/PY files should.
sudo chmod 664 /home/mydomain/www/newHTMLFile.htm
sudo chmod 775 /home/mydomain/www/newScriptFile.php
sudo chmod 775 /home/mydomain/www/newScriptFile.py
Unix file permissions are bitflag values, so the number 4 is Read, 2 is Write, and 1 is eXecute. 664 makes htm files R+W for owner (the first 6), R+W for group (the second 6), and Read-Only for everyone else (the 4 at the end).
775 then is RWX (owner) RWX (group) and Read+eXecute for everyone else.
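If you want to sanity-check how a numeric mode actually lands, a quick scratch-file experiment does it (this assumes GNU coreutils' stat, as on Ubuntu/Debian; the BSD/macOS stat takes different flags):

```shell
# Create a throwaway file, apply a numeric mode, and read the result back.
tmp=$(mktemp)
chmod 754 "$tmp"
mode=$(stat -c '%a' "$tmp")    # numeric form of the mode
human=$(stat -c '%A' "$tmp")   # symbolic form, e.g. -rwxr-xr--
echo "$mode $human"
rm -f "$tmp"
```

Reading the symbolic form left to right after the file-type dash: rwx for owner, r-x for group, r-- for everyone else, which is exactly the 7, 5, and 4 described above.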
It really is easier than it looks, once you get the hang of dicking around with linux systems.
-
@nemesis
Thanks. I'll try to digest this and... see how things go.
-
It really is a lot simpler than it sounds when you put it all together. I'll try to break it down to bite-sized chunks.
First: We're talking about a LAMP stack. That's Linux with Apache, MySQL, and Preprocessing. It's plain ironic, or merely coincidental, that PHP, Python, and Perl all start with P and are the most popular Common Gateway Interface (aka Preprocessing) engines out there right now.
So let's start with Linux. You need to know your directory structure. In advanced setups some of these directories live on separate partitions, but you cross directory/partition boundaries the same way, so that's just technical minutiae.
/bin is all compiled binary applications/executables. It should always contain binary, non-human-readable files.
/etc is mostly config files, static variables for application runtimes, etc. Not everything here is human readable, but anything ending in .ini or .conf or .cfg should be.
/etc/init.d is exclusively for Daemon/Service configurations, meaning things that kick off at system boot and affect all users.
/var is all variable data storage modified by application runtimes, such as logs and storage for persistent data/options selected at runtime. What you find here may or may not be human readable, but anything ending in .txt or .log or .err and such should be.
Common commands:
chown (change owner) which changes the owner/group of a file/directory.
chown user:group /path/to/target changes just that target.
chown user:group /path/to/target -R assumes that target is a directory and recursively changes ownership of target plus all its contents (subdirs, files, files in subdirs, etc).
chmod permissionGroups /path/to/target changes permissions just for that target.
chmod permissionGroups /path/to/target -R assumes that target is a directory and recursively changes permissions of target plus all its contents (subdirs, files, files in subdirs, etc).
permissionGroups can be specified as something like:
chmod ugo+rwx /path/to/target where the U means UserWhoOwnsIt and g means GroupThatOwnsIt and o means OthersWithSystemAccess. The +rwx means you're giving Read+Write+Execute.
Using numerics is easier because you typically want user/group/others to all have different permission values and it's easier to set them all at once rather than make 3 different line entries of chmod u+rwx followed by chmod g+rw followed by chmod o+r.
Enter the numeric permissions.
4 is READ, 2 is WRITE, and 1 is EXECUTE. So if you want to give RWX you use 7 (4+2+1) and if you want to give RW you use 6 (4+2) and if you want just RX you use 5 (4+1) and if you want read only you use 4 - and if writeonly you use 2 - and if execute only you use 1.
Then you have to realize that this is stored as 4 octal digits (3 bits each, 12 bits total). The leftmost digit holds the special setuid/setgid/sticky flags and isn't relevant to you here. So starting from the left side, SKIP one. Then the 2nd digit is USER, the 3rd is GROUP, and the 4th is OTHER.
chmod 0777 /path/to/target gives RWX permissions to User+Group+Others.
chmod 777 /path/to/target is the exact same command, simply omitting the leading 0.
chmod 754 /path/to/target gives RWX to UserOwner, RX to GroupOwner, and Read-only to Others.
Using the -R switch gets recursive.
chmod 0754 is the exact same command.
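The digit math above can be checked with plain shell arithmetic; a toy illustration:

```shell
# Build the numeric mode 754 from its read/write/execute bit values.
r=4; w=2; x=1
owner=$((r + w + x))   # 7 = read + write + execute
group=$((r + x))       # 5 = read + execute
other=$((r))           # 4 = read only
mode="${owner}${group}${other}"
echo "$mode"           # prints 754
```

Swap in whatever combination you need per position and you've built any chmod value from scratch.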
EZ PZ right?
-
Moving on to Apache, the webserver. By itself, Apache doesn't really need MySQL or any Preprocessor at all. By itself Apache serves content to properly privileged requestors via HTTP and HTTPS requests.
The default site configuration in Apache looks like this (except of course without the wordwrap on long lines):
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#
# UNCOMMENT AND EDIT THE NEXT LINE TO CONTAIN YOUR DOMAIN NAME #
#ServerName www.example.com
#
ServerAdmin webmaster@localhost
#
# CHANGE THE VALUE FOLLOWING DocumentRoot ON THE NEXT LINE TO RETARGET ANOTHER LOCAL DIRECTORY WITH THIS SITE DEFINITION #
#
DocumentRoot /var/www/html
#
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
#
# CHANGE THE PART THAT READS ${APACHE_LOG_DIR} TO POINT TO YOUR CUSTOM LOG LOCATION #
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
#
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
Alias /log/ "/var/log/"
<Directory "/var/log/">
Options Indexes MultiViews FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
Allow from all
Require all granted
</Directory>
</VirtualHost>
You simply set this file up, with a different name, hostname, and target directory of course, for each site you host from the local machine.
Once this is done, you use the command:
a2ensite <the part of the filename you selected that precedes the .conf>
This symlinks that definition from /etc/apache2/sites-available into /etc/apache2/sites-enabled - you could also create the symlink manually (and webhosts did, up until a fairly recent version of Apache - not sure which one).
Now you call into:
/etc/init.d/apache2 restart
Which restarts the apache2 webserver, which just like a MUSH loads all its config data into memory at startup.
So to run multiple wikis from the same machine, as far as Apache is concerned, you just dump each wiki into a different directory and then create a new site definition for it.
The only trick to it, as noted above, is getting the file permissions right for Linux. Not only does Linux have to have permissions allowing Apache to read the files, but Linux also needs permissions allowing Apache to serve the files to OTHER users, ie anonymous website visitors.
Other than that, this is as easy as making hardboiled eggs. If you undercook it it'll still be damn tasty and if you overcook it you won't even notice. The only thing you can do wrong is flub the filesystem permissions, in which case you dropped an egg on the floor.
-
The only reason MySQL even comes into this discussion is because MediaWiki talks to it.
You have to create 1 schema for each wiki, in MySQL. Period.
mysql --user=root -p
Enter Password: (do that now)
Your prompt changes to mysql>
mysql> CREATE DATABASE wiki00;
mysql> GRANT ALL PRIVILEGES ON wiki00.* to 'username0'@'localhost' IDENTIFIED BY 'someCleverPassword';
mysql> GRANT ALL PRIVILEGES ON wiki00.* to 'username0'@'127.0.0.1' IDENTIFIED BY 'someCleverPassword';
127.0.0.1 is a TCP loopback address. localhost, to the MySQL client, means a local Unix socket connection (a named pipe on Windows). They're only synonymous, or even vaguely similar, when you're dicking around with your webbrowser and the browser behaves the same no matter which one you type in. That speaks to the amount of work the browser dev team put in.
MySQL, as a high security platform, is engineered to treat 'localhost' and '127.0.0.1' as the two different connection sources they really are.
So what you're doing in these MySQL queries is granting access to 'username0' when it connects from either source/protocol.
mysql> flush privileges;
What you just did is create a MySQL schema (also referred to as an individual database, but that's a horrible term to use because MySQL is the database and calling individual schemas 'databases' confuses newbies). After that you gave every imaginable permission (READ/WRITE/UPDATE/DELETE/ETC) to your new user. In MySQL 5.x, these GRANT statements alone also create the specified user@host definition if it doesn't exist; MySQL 8 and later require an explicit CREATE USER first.
mysql> quit
This returns you to the linux shell.
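If you'd rather not type all of that interactively, the same session can be scripted. A sketch — the schema name, username, and password are the placeholders from above, and note the explicit CREATE USER lines, which newer MySQL versions require before the GRANT:

```shell
# Write the setup statements to a file, then feed them to mysql in one shot.
cat > wiki-setup.sql <<'EOF'
CREATE DATABASE wiki00;
CREATE USER IF NOT EXISTS 'username0'@'localhost' IDENTIFIED BY 'someCleverPassword';
CREATE USER IF NOT EXISTS 'username0'@'127.0.0.1' IDENTIFIED BY 'someCleverPassword';
GRANT ALL PRIVILEGES ON wiki00.* TO 'username0'@'localhost';
GRANT ALL PRIVILEGES ON wiki00.* TO 'username0'@'127.0.0.1';
FLUSH PRIVILEGES;
EOF
# Then run it (prompts for the MySQL root password):
#   mysql --user=root -p < wiki-setup.sql
```

One file like this per wiki schema keeps the whole setup repeatable if you ever rebuild the droplet.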
-
So you're pretty much on your own to install the wiki of your choice, install/configure the Preprocessor (PHP or Python or Perl or whatevs), and then go through the wiki configuration to tell it which database schema to use and to make sure you got the user credentials right.
But learning your way around Linux is really the hardest part.
-
@nemesis said in Zero to Mux (with wiki):
But learning your way around Linux is really the hardest part.
Even though you are right, I don't see a real way out of having someone involved in running a MUSH to know more than the basics in all of Linux, some system administration and at least a little bit of coding.
You can follow a guide only for that long until something changes enough to break... something, or the guide itself isn't explicit enough for that one part, or you start out with anything different without even realizing (the FAQ is assuming Ubuntu but you're on CentOS and apt-get won't work), etc.
These threads absolutely help though.
-
I'm using CentOS 7, as a note. So far, no luck. I tried to follow DigitalOcean's tutorial for CentOS, and I can get through most of the steps you've provided, which I can tell are just the CentOS variants of the same commands.
I get to the end after I've made some symbolic links and reset Apache and it fails out.
I'm going to continue to play around though, particularly since right now I'm trying to make it so a particular subdomain, but not the whole domain (yet) will point to the site.